
Latest publications from the 2019 International Conference on Biometrics (ICB)

Seg-Edge Bilateral Constraint Network for Iris Segmentation
Pub Date : 2019-06-01 DOI: 10.1109/ICB45273.2019.8987405
Junxing Hu, Hui Zhang, Lihu Xiao, Jing Liu, Xingguang Li, Zhaofeng He, Ling Li
Iris semantic segmentation in less-constrained scenarios is the basis of iris recognition. We propose an end-to-end trainable model for iris segmentation, namely the Seg-Edge bilateral constraint network (SEN). The SEN uses the edge map and the coarse segmentation to constrain and optimize each other, producing accurate iris segmentation results. The iris edge map generated from low-level convolutional layers passes detailed edge information to the iris segmentation, while the iris region generated by high-level semantic segmentation constrains the edge-filtering scope, making the edge cues focus on the objects of interest. Moreover, we propose to prune the filters, and their corresponding feature maps, that are identified as useless by the l1-norm, which yields a lightweight iris segmentation network while keeping the performance almost intact or even better. Experimental results suggest that the proposed method outperforms state-of-the-art iris segmentation methods.
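The l1-norm pruning criterion mentioned in the abstract can be made concrete with a short sketch. The snippet below is illustrative only, not the authors' implementation: it ranks the filters of a toy convolutional layer by their l1-norm and keeps only the strongest ones (dropping the corresponding output feature maps); the `keep_ratio` parameter and the layer shape are invented for the example.

```python
import numpy as np

def prune_filters_l1(weights, keep_ratio=0.5):
    """Rank conv filters by l1-norm and keep the strongest ones.

    weights: array of shape (num_filters, in_channels, kh, kw).
    Returns the kept filter indices (in original order) and the pruned
    weight tensor. Filters with small l1-norm are treated as "useless"
    and removed together with their output feature maps.
    """
    l1 = np.abs(weights).reshape(weights.shape[0], -1).sum(axis=1)
    n_keep = max(1, int(round(keep_ratio * weights.shape[0])))
    keep = np.sort(np.argsort(l1)[::-1][:n_keep])  # strongest filters, original order
    return keep, weights[keep]

# toy layer: 8 filters, 3 input channels, 3x3 kernels
rng = np.random.default_rng(0)
w = rng.normal(size=(8, 3, 3, 3))
keep, w_pruned = prune_filters_l1(w, keep_ratio=0.5)
print(w_pruned.shape)  # (4, 3, 3, 3)
```

In a real network the next layer's input channels would be pruned to match, and the slimmed model fine-tuned to recover accuracy.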
Citations: 4
Face Anti-spoofing using Hybrid Residual Learning Framework
Pub Date : 2019-06-01 DOI: 10.1109/ICB45273.2019.8987283
Usman Muhammad, A. Hadid
Face spoofing attacks have received significant attention because criminals are developing techniques such as warped photos, cut photos, and 3D masks to easily fool face recognition systems. Deep learning models offer powerful solutions for improving the security of biometric systems, but exploiting the benefits of multilayer features remains a significant challenge. To alleviate this limitation, this paper presents a hybrid framework that builds the feature representation by fusing ResNet features with greater discriminative power. First, two variants of the residual learning framework are selected as deep feature extractors to extract informative features. Second, the fully-connected layers are used as separate feature descriptors. Third, PCA-based canonical correlation analysis (CCA) is proposed as a feature fusion strategy to combine relevant information and to improve the features' discrimination capacity. Finally, a support vector machine (SVM) is used to construct the final representation of facial features. Experimental results show that our proposed framework achieves state-of-the-art performance without fine-tuning, data augmentation, or a coding strategy on the benchmark databases, namely the MSU mobile face spoof database and the CASIA face anti-spoofing database.
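The CCA fusion step can be illustrated with a minimal sketch. The code below is a simplified NumPy-only stand-in: it omits the PCA stage and implements a toy CCA via SVD of the whitened cross-covariance, then fuses the two feature sets by concatenating their canonical projections. The synthetic data, the regularization term, and the concatenation-based fusion are assumptions for the example, not the paper's implementation.

```python
import numpy as np

def cca_fuse(X, Y, n_comp=2, reg=1e-6):
    """Toy CCA: SVD of the whitened cross-covariance, then serial
    (concatenation) fusion of the canonical projections of X and Y."""
    X = X - X.mean(0)
    Y = Y - Y.mean(0)
    n = X.shape[0] - 1
    Cxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Cyy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n

    def inv_sqrt(C):  # C^{-1/2} for a symmetric positive-definite matrix
        w, V = np.linalg.eigh(C)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    K = inv_sqrt(Cxx) @ Cxy @ inv_sqrt(Cyy)
    U, s, Vt = np.linalg.svd(K)
    Wx = inv_sqrt(Cxx) @ U[:, :n_comp]
    Wy = inv_sqrt(Cyy) @ Vt[:n_comp].T
    return np.hstack([X @ Wx, Y @ Wy])

rng = np.random.default_rng(1)
z = rng.normal(size=(100, 2))                  # shared latent signal
X = np.hstack([z, rng.normal(size=(100, 3))])  # "branch 1" features
Y = np.hstack([z, rng.normal(size=(100, 4))])  # "branch 2" features
F = cca_fuse(X, Y, n_comp=2)
print(F.shape)  # (100, 4)
```

Because both feature sets share the latent signal `z`, the first canonical pair of columns in `F` is strongly correlated; the fused vector would then feed the SVM stage.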
Citations: 15
Mobile Face Recognition Systems: Exploring Presentation Attack Vulnerability and Usability
Pub Date : 2019-06-01 DOI: 10.1109/ICB45273.2019.8987404
H. Hofbauer, L. Debiasi, A. Uhl
We have evaluated face recognition software intended for use on handheld devices (smartphones). While we cannot go into the specifics of the systems under test (due to NDAs), we can present the results of our evaluation of liveness detection (presentation attack detection), matching performance, and attack success at different levels of attack complexity. We contrast robustness against presentation attacks with the systems' usability during regular use, and highlight where current commercial off-the-shelf (COTS) systems stand in that regard. We examine the results specifically under the tradeoff between acceptance, which is linked to usability, and security, which usually impacts usability negatively.
Citations: 2
On the Extent of Longitudinal Finger Rotation in Publicly Available Finger Vein Data Sets
Pub Date : 2019-06-01 DOI: 10.1109/ICB45273.2019.8987384
B. Prommegger, Christof Kauba, A. Uhl
Finger vein recognition deals with the identification of subjects based on the venous patterns within their fingers. The majority of the publicly available finger vein data sets have been acquired with scanner devices that capture a single finger from the palmar side using light transmission. Some of these devices are equipped with a contact surface or other structures to support finger placement. However, such means cannot prevent all possible types of finger misplacement; in particular, longitudinal finger rotation cannot be averted. It has been shown that this type of rotation results in a non-linear deformation of the vein structure, causing severe problems for finger vein recognition systems. So far it has not been known whether, and to what extent, longitudinal finger rotation is present in publicly available finger vein data sets. This paper evaluates the presence of longitudinal finger rotation and its extent in four publicly available finger vein data sets and provides the estimated rotation angles to the scientific public. This additional information will increase the value of the evaluated data sets. To verify the correctness of the estimated rotation angles, we furthermore demonstrate that a simple rotation correction using these angles improves recognition performance.
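As a toy analogue of estimating and correcting the misalignment between intra-class samples, the sketch below makes the simplifying assumption (ours, not the paper's) that a small longitudinal rotation shows up approximately as a horizontal displacement of the vein pattern in the captured image, and searches for the displacement that maximizes correlation with an enrolled reference.

```python
import numpy as np

def estimate_shift(ref, probe, max_shift=20):
    """Brute-force search for the horizontal displacement of `probe`
    relative to `ref`, scored by Pearson correlation. Returns the shift
    to apply to `probe` (via np.roll) to best align it with `ref`."""
    best_corr, best_s = -np.inf, 0
    for s in range(-max_shift, max_shift + 1):
        shifted = np.roll(probe, s, axis=1)
        c = np.corrcoef(ref.ravel(), shifted.ravel())[0, 1]
        if c > best_corr:
            best_corr, best_s = c, s
    return best_s

rng = np.random.default_rng(2)
ref = rng.normal(size=(64, 128))        # stand-in for an enrolled vein image
probe = np.roll(ref, 7, axis=1)         # simulate a displaced intra-class capture
print(estimate_shift(ref, probe))       # -7 (undoes the simulated displacement)
```

A real correction would work on the actual non-linear deformation; this only conveys the search-and-correct idea.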
Citations: 11
Cross-spectrum thermal to visible face recognition based on cascaded image synthesis
Pub Date : 2019-06-01 DOI: 10.1109/ICB45273.2019.8987347
Khawla Mallat, N. Damer, F. Boutros, Arjan Kuijper, J. Dugelay
Face synthesis from the thermal to the visible spectrum is fundamental to cross-spectrum face recognition, as it simplifies integration into existing commercial face recognition systems and enables manual face verification. In this paper, a new solution based on cascaded refinement networks is proposed. The method generates visible-like colored images of high visual quality without requiring large amounts of training data. By employing a contextual loss function during training, the proposed network is inherently scale- and rotation-invariant. We discuss the visual perception of the generated visible-like faces in comparison with recent works. We also provide an objective evaluation in terms of cross-spectrum face recognition, where the generated faces were compared against a gallery in the visible spectrum using two state-of-the-art deep learning based face recognition systems. Compared to the recently published TV-GAN solution, the performance of the OpenFace and LightCNN face recognition systems improved by 42.48% (i.e., from 10.76% to 15.37%) and 71.43% (i.e., from 33.606% to 57.612%), respectively.
Citations: 21
Deep Learning from 3DLBP Descriptors for Depth Image Based Face Recognition
Pub Date : 2019-06-01 DOI: 10.1109/ICB45273.2019.8987432
João Baptista Cardia Neto, A. Marana, C. Ferrari, S. Berretti, A. Bimbo
In this paper, we propose a new framework for face recognition from depth images that is both effective and efficient. It consists of two main stages: first, a handcrafted low-level feature extractor is applied to the raw depth data of the face, extracting the corresponding descriptor images (DIs); then, a not-so-deep (shallow) convolutional neural network (SCNN) is designed that learns from the DIs. This architecture shows two main advantages over the direct application of a deep CNN (DCNN) to depth images of the face. On the one hand, the DIs enrich the raw depth data, emphasizing relevant traits of the face while reducing acquisition noise; this proved decisive in improving the learning capability of the network. On the other hand, the DIs capture low-level features of the face, playing the role for the SCNN that the first layers play in a DCNN architecture. In this way, the SCNN we have designed has far fewer layers and can be trained more easily and faster. Extensive experiments on low- and high-resolution depth face datasets confirmed the above advantages, showing results that are comparable or superior to the state-of-the-art while using far less training data, training time, and network memory.
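A plain 8-neighbour LBP over a depth map gives the flavour of the descriptor-image stage. Note that the actual 3DLBP descriptor additionally encodes the depth-difference magnitudes in extra binary layers, which this simplified sketch omits; the synthetic "bowl" depth map is invented for the example.

```python
import numpy as np

def lbp_depth(img):
    """Basic 8-neighbour LBP code image over a depth map: each interior
    pixel gets one bit per neighbour, set when the neighbour's depth is
    >= the centre depth."""
    c = img[1:-1, 1:-1]
    neigh = [img[:-2, :-2], img[:-2, 1:-1], img[:-2, 2:], img[1:-1, 2:],
             img[2:, 2:], img[2:, 1:-1], img[2:, :-2], img[1:-1, :-2]]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, nb in enumerate(neigh):
        code |= (nb >= c).astype(np.uint8) << bit
    return code

# synthetic depth map: a paraboloid "bowl" centred at (8, 8)
depth = np.fromfunction(lambda i, j: (i - 8) ** 2 + (j - 8) ** 2, (16, 16))
codes = lbp_depth(depth)
print(codes.shape)  # (14, 14)
```

The resulting code image (one byte per pixel) is the kind of low-level descriptor a shallow network can consume directly, in place of the early layers of a DCNN.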
Citations: 3
Understanding Confounding Factors in Face Detection and Recognition
Pub Date : 2019-06-01 DOI: 10.1109/ICB45273.2019.8987419
Janet Anderson, C. Otto, Brianna Maze, N. Kalka, James A. Duncan
Currently, face recognition systems perform at or above human levels on media captured under controlled conditions. However, confounding factors such as pose, illumination, and expression (PIE), as well as facial hair, gender, skin tone, age, and resolution, can degrade performance, especially when large variations are present. We utilize the IJB-C dataset to investigate the impact of confounding factors on both face detection accuracy and face verification genuine matcher scores. Since IJB-C was collected without the use of a face detector, it can be used to evaluate face detection performance, and it contains large variations in pose, illumination, expression, and other factors. We also use a linear regression analysis to identify which confounding factors are most influential for face verification performance.
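The regression analysis can be mimicked on synthetic data: fit genuine-score observations against candidate factors and read relative influence off the fitted weights. The factor names, effect sizes, and noise level below are invented for illustration and are unrelated to the IJB-C results.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500
factors = np.column_stack([
    rng.uniform(0, 60, n),    # pose (yaw, degrees) -- hypothetical factor
    rng.uniform(0, 1, n),     # illumination quality -- hypothetical factor
    rng.integers(0, 2, n),    # facial hair present  -- hypothetical factor
])
# simulated genuine match score with invented linear effects plus noise
score = (0.9 - 0.008 * factors[:, 0] + 0.1 * factors[:, 1]
         - 0.05 * factors[:, 2] + rng.normal(0, 0.02, n))

X = np.column_stack([np.ones(n), factors])        # add intercept column
coef, *_ = np.linalg.lstsq(X, score, rcond=None)  # ordinary least squares
for name, w in zip(["intercept", "pose", "illumination", "facial hair"], coef):
    print(f"{name}: {w:+.4f}")
```

The fitted weights recover the planted effects, so the largest-magnitude (suitably standardized) coefficients flag the most influential factors.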
Citations: 3
Crafting A Panoptic Face Presentation Attack Detector
Pub Date : 2019-06-01 DOI: 10.1109/ICB45273.2019.8987257
Suril Mehta, A. Uberoi, Akshay Agarwal, Mayank Vatsa, Richa Singh
With advancements in technology and the growing popularity of facial photo editing on social media, tools such as face swapping and face morphing have become increasingly accessible to the general public. This opens up the possibility of different kinds of face presentation attacks, which impostors can exploit to gain unauthorized access to a biometric system. Moreover, the wide availability of 3D printers has caused a shift from print attacks to 3D mask attacks. With increasing types of attacks, it is necessary to develop a generic and ubiquitous algorithm that takes a panoptic view of these attacks and can detect a spoofed image irrespective of the method used. The key contribution of this paper is a deep learning based panoptic algorithm for the detection of both digital and physical presentation attacks using a Cross Asymmetric Loss Function (CALF). Performance is evaluated for digital and physical attacks in three scenarios: a ubiquitous environment, individual databases, and cross-attack/cross-database settings. Experimental results showcase the superior performance of the proposed presentation attack detection algorithm.
Citations: 17
A New Approach for EEG-Based Biometric Authentication Using Auditory Stimulation
Pub Date : 2019-06-01 DOI: 10.1109/ICB45273.2019.8987271
Sherif Nagib Abbas Seha, D. Hatzinakos
In this paper, a new approach to the human recognition task is presented, using brainwave responses to auditory stimulation. A system based on this class of brainwaves offers advantages over conventional traits, being more secure, harder to spoof, and cancelable. For this purpose, EEG signals were recorded from 21 subjects while they listened to modulated auditory tones, in single- and two-session setups. Three different types of features were evaluated, based on energy and entropy estimates of the EEG sub-band rhythms obtained using narrow-band Gaussian filtering and wavelet packet decomposition. These features are classified using discriminant analysis in both the identification and verification modes of authentication. High recognition rates of up to 97.18% and low error rates down to 4.3% were achieved in the single-session setup. Moreover, in the two-session setup, the proposed system is shown to be more time-permanent than previous works.
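The energy/entropy sub-band features can be sketched with a simple FFT-based band split, used here as a stand-in for the paper's narrow-band Gaussian filtering and wavelet packet decomposition. The sampling rate, band edges, and test signal are assumptions for the example.

```python
import numpy as np

def band_features(sig, fs, bands):
    """Energy and spectral entropy per EEG sub-band, computed from the
    power spectrum of the signal; returns one (energy, entropy) pair
    per band, concatenated into a feature vector."""
    freqs = np.fft.rfftfreq(len(sig), 1 / fs)
    power = np.abs(np.fft.rfft(sig)) ** 2
    feats = []
    for lo, hi in bands:
        p = power[(freqs >= lo) & (freqs < hi)]
        p_norm = p / p.sum()
        energy = p.sum()
        entropy = -np.sum(p_norm * np.log2(p_norm + 1e-12))
        feats.extend([energy, entropy])
    return np.array(feats)

# synthetic 2-second "EEG" trace: 10 Hz (alpha) + 22 Hz (beta) + noise
rng = np.random.default_rng(4)
fs = 256
t = np.arange(fs * 2) / fs
sig = (np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 22 * t)
       + 0.01 * rng.normal(size=t.size))
bands = [(4, 8), (8, 13), (13, 30)]   # theta, alpha, beta
f = band_features(sig, fs, bands)
print(f.shape)  # (6,)
```

On this signal the alpha-band energy dominates the theta band, as expected from the 10 Hz component; such per-band vectors would then feed the discriminant-analysis classifier.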
引用次数: 6
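The feature pipeline the abstract describes — per-rhythm energy and entropy of EEG sub-bands — can be sketched as follows. This is a minimal illustration only: the band edges are the classical EEG rhythm ranges, and a plain DFT power spectrum stands in for the paper's narrow-band Gaussian filtering and wavelet packet decomposition, which are not reproduced here.

```python
import math

def dft_power(signal):
    """Naive DFT power spectrum (illustrative; a real pipeline would use an FFT)."""
    n = len(signal)
    power = []
    for k in range(n // 2 + 1):
        re = sum(x * math.cos(2 * math.pi * k * t / n) for t, x in enumerate(signal))
        im = -sum(x * math.sin(2 * math.pi * k * t / n) for t, x in enumerate(signal))
        power.append(re * re + im * im)
    return power

def band_features(signal, fs, bands):
    """Per-band (energy, spectral entropy) features from one EEG channel."""
    power = dft_power(signal)
    freq_res = fs / len(signal)  # Hz per DFT bin
    feats = []
    for lo, hi in bands:
        band = [p for k, p in enumerate(power) if lo <= k * freq_res < hi]
        energy = sum(band)
        total = energy or 1.0
        probs = [p / total for p in band if p > 0]
        # Shannon entropy of the normalized in-band power distribution
        entropy = -sum(p * math.log(p) for p in probs)
        feats.append((energy, entropy))
    return feats

# Classical EEG rhythm bands in Hz: delta, theta, alpha, beta
BANDS = [(0.5, 4), (4, 8), (8, 13), (13, 30)]
```

For a pure 10 Hz tone sampled at 64 Hz, the alpha band (8–13 Hz) carries essentially all of the energy, so the feature vector cleanly separates rhythms; per-subject differences in such band profiles are what the classifier exploits.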
Alignment Free and Distortion Robust Iris Recognition
Pub Date : 2019-06-01 DOI: 10.1109/ICB45273.2019.8987369
Min Ren, Caiyong Wang, Yunlong Wang, Zhenan Sun, T. Tan
Iris recognition is a reliable personal identification method, but there is still much room to improve its accuracy, especially in less-constrained situations. For example, free head movement may cause large rotation differences between iris images, and illumination variations may cause irregular distortion of the iris texture. To match intra-class iris images robustly under head rotation, existing solutions usually require a precise alignment operation, either by exhaustive search within a predetermined range during iris image preprocessing or by brute-force search for the minimum Hamming distance during iris feature matching. In the wild, iris rotation is far more uncertain than in constrained situations, and exhaustive search within a predetermined range is impracticable. This paper presents a unified feature-level solution for both alignment-free and distortion-robust iris recognition in the wild. A new deep-learning-based method named Alignment Free Iris Network (AFINet) is proposed, which utilizes a trainable VLAD (Vector of Locally Aggregated Descriptors) encoder called NetVLAD [18] to decouple the correlations between local representations and their spatial positions. Deformable convolution [5] is further leveraged to overcome iris texture distortion through dense adaptive sampling. The results of extensive experiments on three public iris image databases and simulated degradation databases show that AFINet significantly outperforms state-of-the-art iris recognition methods.
Citations: 5
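The rotation robustness claimed for AFINet rests on a property of VLAD-style pooling: local descriptors are soft-assigned to cluster centres and their residuals are summed, so the pooled representation does not depend on the spatial order of the descriptors. The sketch below illustrates that property only; the fixed centres and softmax assignment here are hypothetical stand-ins, whereas in NetVLAD both are learned end-to-end.

```python
import math

def soft_assign(desc, centers, alpha=10.0):
    """Softmax over negative squared distances to each centre (NetVLAD-style)."""
    logits = [-alpha * sum((d - c) ** 2 for d, c in zip(desc, ctr)) for ctr in centers]
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def vlad(descriptors, centers):
    """Sum soft-assigned residuals into a K x D map, then L2-normalise.

    The output is invariant to any permutation of the input descriptors,
    which is why no explicit rotation alignment of the iris is needed.
    """
    k, dim = len(centers), len(centers[0])
    out = [[0.0] * dim for _ in range(k)]
    for desc in descriptors:
        weights = soft_assign(desc, centers)
        for j, ctr in enumerate(centers):
            for i in range(dim):
                out[j][i] += weights[j] * (desc[i] - ctr[i])
    flat = [v for row in out for v in row]
    norm = math.sqrt(sum(v * v for v in flat)) or 1.0
    return [v / norm for v in flat]
```

Feeding the same descriptors in any order yields an identical pooled vector, which is the decoupling of local representations from their spatial positions that the abstract refers to.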
Journal
2019 International Conference on Biometrics (ICB)