
2020 25th International Conference on Pattern Recognition (ICPR): Latest Publications

Object Detection Model Based on Scene-Level Region Proposal Self-Attention
Pub Date : 2021-01-10 DOI: 10.1109/ICPR48806.2021.9412726
Yu Quan, Zhixin Li, Canlong Zhang, Huifang Ma
To improve the performance of two-stage object detection, and in view of the importance of scene and semantic information for visual recognition, this paper studies and analyzes the neural network of an object detection algorithm. The main contributions are as follows. A scene-level region proposal self-attention object detection model based on depthwise separable convolution is proposed. To obtain stronger semantic and context information about the target scene, the scene-level region proposal self-attention module is reconstructed based on the region proposal recognition process. The feature map output by the feature pyramid network is sent into three parallel branches: a semantic segmentation module, a candidate region network module, and a region proposal self-attention module. At the same time, to improve the overall performance of the model, a depthwise separable convolution module is built into the backbone network, which consists of six stages; the separable convolution module is integrated into the fifth and sixth stages. Finally, an object detection method based on bounding-box regression network enhancement is proposed to achieve accurate target localization. To verify the effectiveness of each model, its experimental results are analyzed.
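The depthwise separable convolution this model's backbone builds on factorises a standard convolution into a per-channel spatial filter followed by a 1×1 pointwise step that mixes channels. A minimal numpy sketch (valid padding, stride 1, illustrative shapes; not the authors' implementation):

```python
import numpy as np

def depthwise_separable_conv(x, depthwise_k, pointwise_k):
    """Depthwise convolution (one k x k filter per input channel)
    followed by a 1x1 pointwise convolution that mixes channels.
    x: (H, W, C_in), depthwise_k: (k, k, C_in), pointwise_k: (C_in, C_out)."""
    H, W, C_in = x.shape
    k = depthwise_k.shape[0]
    Ho, Wo = H - k + 1, W - k + 1
    dw = np.zeros((Ho, Wo, C_in))
    for c in range(C_in):                       # each channel filtered independently
        for i in range(Ho):
            for j in range(Wo):
                dw[i, j, c] = np.sum(x[i:i + k, j:j + k, c] * depthwise_k[:, :, c])
    return dw @ pointwise_k                     # 1x1 conv = per-pixel matrix multiply
```

Compared with a full convolution, this factorisation cuts the parameter count roughly from `k*k*C_in*C_out` to `k*k*C_in + C_in*C_out`, which is why it is attractive for the overall efficiency of the model.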
Citations: 0
Adaptive Noise Injection for Training Stochastic Student Networks from Deterministic Teachers
Pub Date : 2021-01-10 DOI: 10.1109/ICPR48806.2021.9412385
Y. Tan, Y. Elovici, A. Binder
Adversarial attacks are a prevalent problem causing misclassification in machine learning models, and stochasticity is a promising direction towards greater robustness. However, stochastic networks frequently underperform compared to deterministic deep networks. In this work, we present a conceptually clear adaptive noise injection mechanism, combined with teacher initialisation, that adjusts its degree of randomness dynamically through the computation of mini-batch statistics. This mechanism is embedded within a simple framework for obtaining stochastic networks from existing deterministic networks. Our experiments show that our method outperforms prior baselines under white-box settings, exemplified on CIFAR-10 and CIFAR-100. We then perform an in-depth analysis of how varying different components of training affects robustness and accuracy, studying the evolution of the decision boundary and the trend curves of clean accuracy and attack success over differing degrees of stochasticity. We also shed light on the effects of adversarial training on a pre-trained network, through the lens of decision boundaries.
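The core idea of tying the degree of randomness to mini-batch statistics can be sketched as injecting Gaussian noise whose per-feature scale follows the batch standard deviation. This is a hedged illustration of the general mechanism only; `alpha` is a hypothetical scale hyperparameter, not a value from the paper:

```python
import numpy as np

def adaptive_noise_inject(acts, alpha=0.1, rng=None):
    """Inject zero-mean Gaussian noise into a mini-batch of activations,
    with per-feature std tied to the mini-batch statistics so the amount
    of randomness adapts to the data. acts: (batch, features)."""
    rng = rng or np.random.default_rng(0)
    batch_std = acts.std(axis=0, keepdims=True)        # mini-batch statistic
    noise = rng.normal(0.0, 1.0, size=acts.shape)
    return acts + alpha * batch_std * noise
```

Note that features that are constant across the mini-batch receive no noise at all, so the injection never overwhelms low-variance activations.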
Citations: 1
Local Grouped Invariant Order Pattern for Grayscale-Inversion and Rotation Invariant Texture Classification
Pub Date : 2021-01-10 DOI: 10.1109/ICPR48806.2021.9412743
Yankai Huang, Tiecheng Song, Shuang Li, Yuanjing Han
Local binary pattern (LBP) based descriptors have shown effectiveness for texture classification. However, most of them encode the intensity relationships between neighboring pixels and a central pixel into binary forms, thereby failing to capture the complete ordering information among neighbors. Several methods have explored intensity order information for feature description, but they do not address the grayscale-inversion problem. In this paper, we propose an image descriptor called local grouped invariant order pattern (LGIOP) for grayscale-inversion and rotation invariant texture classification. Our LGIOP is a histogram representation which jointly encodes neighboring order information and central pixels. In particular, two new order encoding methods, i.e., intensity order encoding and distance order encoding, are proposed to describe the neighboring relationships. These two order encoding methods are not only complementary but also invariant to grayscale-inversion and rotation changes. Experiments for texture classification demonstrate that the proposed LGIOP descriptor is robust to (linear or nonlinear) grayscale inversion and image rotation.
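To illustrate the contrast with LBP's binary thresholding, the sketch below encodes the full intensity ranking of a pixel's neighbours and then groups each rank with its mirror. Under a (linear) grayscale inversion with distinct values, rank r maps to n-1-r, so the grouped code is unchanged. This is a toy construction conveying the inversion-invariance idea only, not the paper's actual LGIOP descriptor:

```python
import numpy as np

def grouped_order_code(neighbors):
    """Encode the complete intensity ordering of the neighbours (rather
    than LBP's binary comparisons against the centre), then group rank r
    with rank n-1-r so the code survives grayscale inversion."""
    n = len(neighbors)
    ranks = np.argsort(np.argsort(neighbors))   # rank of each neighbour
    return tuple(min(int(r), n - 1 - int(r)) for r in ranks)
```

Ties in neighbour intensities would need an explicit tie-breaking rule; the sketch assumes distinct values.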
Citations: 0
A modified Single-Shot multibox Detector for beyond Real-Time Object Detection
Pub Date : 2021-01-10 DOI: 10.1109/ICPR48806.2021.9413300
G. Orfanidis, K. Ioannidis, S. Vrochidis, A. Tefas, Y. Kompatsiaris
This work focuses on examining the performance of the Single Shot Detector (SSD) model in resource-restricted systems where retaining the power of the full model is a significant prerequisite. The proposed SSD variations examine the behavior of lighter versions of SSD while proposing measures to limit the unavoidable performance shortfall. The outcomes of the conducted research demonstrate a remarkable trade-off between performance losses, speed improvement, and the required resource reservation. Thus, the experimental results demonstrate the efficiency of the presented SSD alterations in accomplishing higher frame rates while retaining the performance of the original model.
Citations: 0
Towards Explaining Adversarial Examples Phenomenon in Artificial Neural Networks
Pub Date : 2021-01-10 DOI: 10.1109/ICPR48806.2021.9412367
R. Barati, R. Safabakhsh, M. Rahmati
In this paper, we study the existence of adversarial examples and adversarial training from the standpoint of convergence, and provide evidence that pointwise convergence in ANNs can explain these observations. The main contribution of our proposal is that it relates the objective of evasion attacks and adversarial training to concepts already defined in learning theory. We also extend and unify some of the other proposals in the literature and provide alternative explanations for the observations made in those proposals. Through different experiments, we demonstrate that the framework is valuable for studying the phenomenon and is applicable to real-world problems.
Citations: 1
A Boundary-aware Distillation Network for Compressed Video Semantic Segmentation
Pub Date : 2021-01-10 DOI: 10.1109/ICPR48806.2021.9412821
Hongchao Lu, Zhidong Deng
In recent years, optical flow has often been estimated in order to reuse features and thereby accelerate video semantic segmentation. With the addition of an optical flow network, however, extra cost is incurred, and accuracy may be degraded by repeated warping operations. In this paper, we propose a boundary-aware distillation network (BDNet) that replaces the optical flow network with the block motion vectors already encoded in compressed video, at negligible computational complexity. To make features salient, an auxiliary boundary-aware stream is added to the main stream to jointly estimate object silhouettes and segmentation. To further correct warped features, a well-trained teacher network is employed to transfer knowledge to the main stream. Both the boundary-aware stream and the teacher network are discarded during the inference stage, so the video segmentation network becomes faster without any added computational burden. By splitting the task into three components, our BDNet achieves almost 10% time saving as well as a 1.6% accuracy improvement over the baseline on the Cityscapes dataset.
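The feature-reuse step rests on warping a previous frame's features with the per-block motion vectors that a compressed video stream already carries. A toy nearest-neighbour sketch of that warping (single-channel map, integer vectors, fallback to the same location when a sample falls out of range; not the paper's BDNet):

```python
import numpy as np

def warp_with_motion_vectors(prev_feat, mv, block=2):
    """Warp a previous feature map using per-block (dy, dx) motion vectors
    instead of running an optical-flow network.
    prev_feat: (H, W); mv: (H//block, W//block, 2) integer offsets."""
    H, W = prev_feat.shape
    out = np.empty_like(prev_feat)
    for i in range(H):
        for j in range(W):
            dy, dx = mv[i // block, j // block]
            src_i, src_j = i - dy, j - dx          # sample from where the block came
            if 0 <= src_i < H and 0 <= src_j < W:
                out[i, j] = prev_feat[src_i, src_j]
            else:
                out[i, j] = prev_feat[i, j]        # out of range: keep current value
    return out
```

Because the motion vectors are read from the bitstream rather than estimated, the warping itself is the only added computation.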
Citations: 5
Image Inpainting with Contrastive Relation Network
Pub Date : 2021-01-10 DOI: 10.1109/ICPR48806.2021.9412640
Xiaoqiang Zhou, Junjie Li, Zilei Wang, R. He, T. Tan
Image inpainting faces the challenging twin requirements of structural reasonableness and texture coherence. In this paper, we propose a two-stage inpainting framework whose basic idea is to address the two requirements in two separate stages. A completed segmentation of the corrupted image is first predicted by a segmentation reconstruction network, while fine-grained image details are restored in the second stage by an image generator. The two stages are connected in series, as the image details are generated under the guidance of the completed segmentation map predicted in the first stage. Specifically, in the second stage, we propose a novel graph-based relation network to model the relationships that exist in the corrupted image. The relation network considers both the intra-relationships of pixels in the same semantic region and the inter-relationships between different semantic parts, improving the consistency and compatibility of image textures. Besides, a contrastive loss is designed to facilitate training the relation network. Such a framework not only simplifies the inpainting problem directly but also exploits the relationships in the corrupted image explicitly. Extensive experiments on various public datasets quantitatively and qualitatively demonstrate the superiority of our approach compared with the state of the art.
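The contrastive loss mentioned above is commonly formulated InfoNCE-style: pull an anchor embedding towards its positive and away from negatives. A generic sketch of that formulation (normalised 1-D embeddings, hypothetical temperature `tau`; not necessarily the paper's exact objective):

```python
import numpy as np

def contrastive_loss(anchor, positive, negatives, tau=0.1):
    """InfoNCE-style contrastive loss: -log of the softmax probability
    assigned to the positive among {positive} + negatives, with cosine
    similarities scaled by temperature tau. Inputs are L2-normalised."""
    pos = np.exp(anchor @ positive / tau)
    neg = sum(np.exp(anchor @ n / tau) for n in negatives)
    return -np.log(pos / (pos + neg))
```

The loss approaches zero when the anchor aligns with the positive and is far from all negatives, and grows as the roles reverse.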
Citations: 2
Privacy Attributes-aware Message Passing Neural Network for Visual Privacy Attributes Classification
Pub Date : 2021-01-10 DOI: 10.1109/ICPR48806.2021.9412853
Hanbin Hong, Wentao Bao, Yuan Hong, Yu Kong
Visual Privacy Attribute Classification (VPAC) identifies privacy information leakage via social media images. Images containing privacy attributes such as skin color, face, or gender are classified into multiple privacy attribute categories in VPAC. With limited prior work on this task, current methods often extract features from images and simply classify the extracted features into multiple privacy attribute classes. The dependencies between privacy attributes (e.g., skin color and face typically coexist in the same image) are usually ignored in classification, which degrades performance in VPAC. In this paper, we propose a novel end-to-end Privacy Attributes-aware Message Passing Neural Network (PA-MPNN) to address VPAC. Privacy attributes are treated as nodes on a graph, and an MPNN is introduced to model the privacy attribute dependencies. To generate representative features for privacy attribute nodes, a class-wise encoder-decoder is proposed to learn a latent space for each attribute. An attention mechanism with multiple correlation matrices is also introduced in the MPNN to learn the privacy attribute graph automatically. Experimental results on the Privacy Attribute Dataset demonstrate that our framework achieves better performance than state-of-the-art methods for visual privacy attribute classification.
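A single message-passing step over an attribute graph can be sketched as: aggregate neighbour features through the adjacency (or learned correlation) matrix, then apply a linear update with a nonlinearity. The shapes and weight names below are hypothetical, meant only to illustrate the mechanism:

```python
import numpy as np

def message_passing_step(node_feats, adj, w_self, w_msg):
    """One MPNN step on an attribute graph.
    node_feats: (N, D) one feature vector per attribute node;
    adj: (N, N) adjacency/correlation matrix; w_self, w_msg: (D, D)."""
    msgs = adj @ node_feats                    # aggregate neighbour features
    updated = node_feats @ w_self + msgs @ w_msg
    return np.maximum(updated, 0.0)            # ReLU nonlinearity
```

Stacking several such steps lets information about co-occurring attributes (e.g. skin color and face) propagate between their nodes before classification.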
Citations: 1
Lookalike Disambiguation: Improving Face Identification Performance at Top Ranks
Pub Date : 2021-01-10 DOI: 10.1109/ICPR48806.2021.9412063
Thomas Swearingen, A. Ross
A face identification system compares an unknown input probe image to a gallery of labeled face images in order to determine the identity of the probe image. The result of identification is a ranked match list with the most similar gallery face image at the top (rank 1) and the least similar at the bottom. In many systems, the top-ranked gallery images may look very similar to the probe image, as well as to each other, which can sometimes result in misidentification of the probe. Such similar-looking faces pertaining to different identities are referred to as lookalike faces. We hypothesize that a matcher specifically trained to disambiguate lookalike face images, when combined with a regular face matcher, will improve overall identification performance. This work proposes reranking the initial ranked match list using a disambiguator designed especially for lookalike face pairs. It also evaluates schemes for selecting the gallery images in the initial ranked match list that should be re-ranked. Experiments on the challenging TinyFace dataset show that the proposed approach improves the closed-set identification accuracy of a state-of-the-art face matcher.
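The two-stage scheme can be sketched generically: keep the tail of the ranked list as-is and let a second, lookalike-trained scorer reorder only the head. The function and parameter names here are illustrative, not the paper's API:

```python
def rerank_top_k(ranked, disambiguator, probe, k=5):
    """Re-rank the top-k entries of an initial ranked match list using a
    second scorer trained to disambiguate lookalike faces.
    ranked: list of (gallery_id, score) pairs, best first;
    disambiguator(probe, gallery_id) -> new score (higher = better)."""
    head, tail = ranked[:k], ranked[k:]
    head = sorted(head, key=lambda p: disambiguator(probe, p[0]), reverse=True)
    return head + tail  # only the top-k candidates are reordered
```

Choosing which gallery images fall into the re-ranked head is exactly the selection problem the abstract says the paper evaluates.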
Citations: 6
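The re-ranking scheme described in the abstract above can be sketched in a few lines. The fused-score form, the fusion weight `alpha`, and the `disamb_fn` interface are illustrative assumptions for this sketch, not the paper's actual method:

```python
import numpy as np

def rerank_top_k(base_scores, disamb_fn, probe, gallery, k=10, alpha=0.5):
    """Re-rank the top-k candidates of a base face matcher with a
    lookalike disambiguator; entries below rank k keep their order."""
    order = np.argsort(-base_scores)              # best candidate first
    top_k = order[:k]
    fused = base_scores.copy()
    for idx in top_k:
        # disamb_fn(probe, gallery_image) -> similarity in [0, 1]
        fused[idx] = (1 - alpha) * base_scores[idx] + alpha * disamb_fn(probe, gallery[idx])
    reordered = top_k[np.argsort(-fused[top_k])]  # re-rank only the head of the list
    return np.concatenate([reordered, order[k:]])
```

With `alpha=1` the disambiguator alone decides the head of the list; the selection schemes the paper evaluates would replace the fixed top-k used here.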
BCAU-Net: A Novel Architecture with Binary Channel Attention Module for MRI Brain Segmentation
Pub Date : 2021-01-10 DOI: 10.1109/ICPR48806.2021.9413051
Yongpei Zhu, Zicong Zhou, G. Liao, Kehong Yuan
Recently, deep learning-based networks have achieved advanced performance in medical image segmentation. However, progress in deep learning for magnetic resonance image (MRI) segmentation of normal brain tissues has been slow. In this paper, inspired by the channel attention module, we propose a new architecture, Binary Channel Attention U-Net (BCAU-Net), which introduces a novel Binary Channel Attention Module (BCAM) into the skip connections of U-Net and thus takes full advantage of the channel information extracted from the encoding path and the corresponding decoding path. To better aggregate multiscale spatial information from the feature map, spatial pyramid pooling (SPP) modules with different pooling operations are used in BCAM instead of the original average-pooling and max-pooling operations. We verify this model on two datasets, IBSR and MRBrainS18, and obtain better performance on MRI brain segmentation compared with other methods. We believe the proposed method can advance performance in brain segmentation and clinical diagnosis.
{"title":"BCAU-Net: A Novel Architecture with Binary Channel Attention Module for MRI Brain Segmentation","authors":"Yongpei Zhu, Zicong Zhou, G. Liao, Kehong Yuan","doi":"10.1109/ICPR48806.2021.9413051","DOIUrl":"https://doi.org/10.1109/ICPR48806.2021.9413051","url":null,"abstract":"Recently deep learning-based networks have achieved advanced performance in medical image segmentation. However, the development of deep learning is slow in magnetic resonance image (MRI) segmentation of normal brain tissues. In this paper, inspired by channel attention module, we propose a new architecture, Binary Channel Attention U-Net (BCAU-Net), by introducing a novel Binary Channel Attention Module (BCAM) into skip connection of U-Net, which can take full advantages of the channel information extracted from the encoding path and corresponding decoding path. To better aggregate multiscale spatial information of the feature map, spatial pyramid pooling (SPP) modules with different pooling operations are used in BCAM instead of original average-pooling and max-pooling operations. We verify this model on two datasets including IBSR and MRBrainS18, and obtain better performance on MRI brain segmentation compared with other methods. We believe the proposed method can advance the performance in brain segmentation and clinical diagnosis.","PeriodicalId":6783,"journal":{"name":"2020 25th International Conference on Pattern Recognition (ICPR)","volume":"25 1","pages":"5690-5695"},"PeriodicalIF":0.0,"publicationDate":"2021-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74402102","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
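The SPP-based aggregation mentioned in the BCAM description can be illustrated with a minimal NumPy sketch; the grid levels and the single `op` argument (e.g. `np.max` or `np.mean`, standing in for the different pooling operations) are simplifying assumptions, not the paper's implementation:

```python
import numpy as np

def spatial_pyramid_pool(feat, levels=(1, 2, 4), op=np.max):
    """Pool a (C, H, W) feature map over grids of several resolutions and
    concatenate the per-cell results into one fixed-length descriptor."""
    c, h, w = feat.shape
    out = []
    for n in levels:
        row_groups = np.array_split(np.arange(h), n)  # contiguous row bands
        col_groups = np.array_split(np.arange(w), n)  # contiguous column bands
        for rows in row_groups:
            for cols in col_groups:
                cell = feat[:, rows[0]:rows[-1] + 1, cols[0]:cols[-1] + 1]
                out.append(op(cell, axis=(1, 2)))     # one (C,) vector per cell
    return np.concatenate(out)  # length = C * sum(n*n for n in levels)
```

Because each level pools to a fixed grid, the descriptor length depends only on C and the chosen levels, not on H and W, which is what lets the module mix pooling operations at several spatial resolutions.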
2020 25th International Conference on Pattern Recognition (ICPR)