
2019 International Conference on Biometrics (ICB): Latest Publications

Domain Adaptation in Multi-Channel Autoencoder based Features for Robust Face Anti-Spoofing
Pub Date: 2019-06-01 DOI: 10.1109/ICB45273.2019.8987247
O. Nikisins, Anjith George, S. Marcel
While the performance of face recognition systems has improved significantly in the last decade, they have been shown to be highly vulnerable to presentation attacks (spoofing). Most research in the field of face presentation attack detection (PAD) has focused on boosting the performance of systems within a single database. Face PAD datasets are usually captured with RGB cameras and contain a very limited number of both bona fide samples and presentation attack instruments. Training face PAD systems on such data leads to poor performance, even in the closed-set scenario, especially when sophisticated attacks are involved. We explore two paths to boost the performance of the face PAD system against challenging attacks. First, we use multi-channel (RGB, depth and NIR) data, which is readily accessible in a number of mass-production devices. Second, we develop a novel autoencoder + MLP based face PAD algorithm. Moreover, instead of collecting more data for training the proposed deep architecture, we propose a domain adaptation technique that transfers knowledge of facial appearance from the RGB to the multi-channel domain. We also demonstrate that features learned from individual facial regions are more discriminative than features learned from the entire face. The proposed system is tested on a very recent publicly available multi-channel PAD database with a wide variety of presentation attacks.
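A minimal sketch of the two-stage idea follows: pretrain a per-region autoencoder on abundant RGB face patches, fine-tune it on the scarcer multi-channel data, then score the latent codes with an MLP. The layer sizes, the flattened-patch input, and both class names are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal PyTorch sketch of an autoencoder + MLP face PAD pipeline.
# All dimensions and module names are assumptions for illustration.
import torch
import torch.nn as nn

class RegionAutoencoder(nn.Module):
    """Autoencoder for one facial region; input is a flattened multi-channel patch."""
    def __init__(self, in_dim: int, latent_dim: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                     nn.Linear(256, latent_dim), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                     nn.Linear(256, in_dim), nn.Sigmoid())

    def forward(self, x):
        z = self.encoder(x)          # latent code reused later as the PAD feature
        return self.decoder(z), z

class PadMLP(nn.Module):
    """Scores bona fide vs. attack from the latent codes of all facial regions."""
    def __init__(self, n_regions: int, latent_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_regions * latent_dim, 128),
                                 nn.ReLU(), nn.Linear(128, 1))

    def forward(self, latents):      # latents: (batch, n_regions * latent_dim)
        return self.net(latents)

# Domain adaptation idea: train RegionAutoencoder on RGB patches first,
# fine-tune it on RGB+depth+NIR patches, then train PadMLP on the latents.
```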
Citations: 27
PRNU-based finger vein sensor identification: On the effect of different sensor croppings
Pub Date: 2019-06-01 DOI: 10.1109/ICB45273.2019.8987237
Dominik Söllinger, Babak Maser, A. Uhl
In this work, we study the applicability of PRNU-based sensor identification methods to finger vein imagery. We also investigate the effect of different image regions on identification performance by examining five croppings of different sizes. The proposed method is tested on eight publicly available finger vein datasets. For each finger vein sensor, a noise reference pattern is generated and subsequently matched against noise residuals extracted from previously unseen finger vein images. Although the final result strongly encourages the use of PRNU-based approaches for sensor identification, it can also be observed that the choice of image region for PRNU extraction is crucial. The results clearly show that regions containing the biometric trait (varying content) should be preferred over background regions with non-biometric content (identical content).
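The matching pipeline described here follows the standard PRNU recipe: estimate a noise residual per image, average many residuals into a per-sensor reference pattern, and attribute a probe to the sensor whose reference correlates best. The sketch below is a minimal NumPy illustration; the Gaussian filter stands in for the wavelet denoiser commonly used in PRNU work, and its width is an assumption.

```python
# Illustrative NumPy sketch of PRNU-style sensor matching.
import numpy as np
from scipy.ndimage import gaussian_filter

def noise_residual(img: np.ndarray, sigma: float = 1.5) -> np.ndarray:
    """Residual = image minus its denoised version (the sensor-noise estimate)."""
    img = img.astype(np.float64)
    return img - gaussian_filter(img, sigma)

def reference_pattern(images) -> np.ndarray:
    """Average residuals of many images from one sensor to suppress scene content."""
    return np.mean([noise_residual(i) for i in images], axis=0)

def ncc(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized cross-correlation between a probe residual and a reference."""
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# A probe image is attributed to the sensor whose reference pattern yields the
# highest ncc; cropping both to the same region (the vein area rather than the
# background) mirrors the paper's finding on region choice.
```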
Citations: 3
Audio-Visual Kinship Verification in the Wild
Pub Date: 2019-06-01 DOI: 10.1109/ICB45273.2019.8987241
Xiaoting Wu, Eric Granger, T. Kinnunen, Xiaoyi Feng, A. Hadid
Kinship verification is a challenging problem in which recognition systems are trained to establish a kin relation between two individuals based on facial images or videos. However, due to variations in capture conditions (background, pose, expression, illumination and occlusion), state-of-the-art systems currently provide a low level of accuracy. As in many visual recognition and affective computing applications, kinship verification may benefit from a combination of discriminant information extracted from both video and audio signals. In this paper, we investigate for the first time the fusion of audio-visual information from the face and voice modalities to improve kinship verification accuracy. First, we propose a new multi-modal kinship dataset called TALking KINship (TALKIN), comprising several pairs of video sequences of subjects talking. State-of-the-art conventional and deep learning models are assessed and compared for kinship verification using this dataset. Finally, we propose a deep Siamese network for multi-modal fusion of kinship relations. Experiments with the TALKIN dataset indicate that the proposed Siamese network provides a significantly higher level of accuracy than baseline uni-modal and multi-modal fusion techniques for kinship verification. Results also indicate that audio (vocal) information is complementary and useful for the kinship verification problem.
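A minimal sketch of the kind of two-branch Siamese fusion the paper proposes is given below, assuming precomputed face and voice feature vectors; the embedding sizes, the projection heads, and the absolute-difference scoring are illustrative assumptions rather than the exact TALKIN architecture.

```python
# Hedged PyTorch sketch of a Siamese network fusing face and voice features
# for kin / not-kin scoring. All dimensions are assumptions.
import torch
import torch.nn as nn

class AudioVisualKinNet(nn.Module):
    def __init__(self, face_dim: int = 512, voice_dim: int = 192, emb_dim: int = 128):
        super().__init__()
        self.face_proj = nn.Sequential(nn.Linear(face_dim, emb_dim), nn.ReLU())
        self.voice_proj = nn.Sequential(nn.Linear(voice_dim, emb_dim), nn.ReLU())
        self.head = nn.Sequential(nn.Linear(2 * emb_dim, 64), nn.ReLU(),
                                  nn.Linear(64, 1))

    def embed(self, face_feat, voice_feat):
        # Fuse both modalities of one person into a single embedding.
        return torch.cat([self.face_proj(face_feat),
                          self.voice_proj(voice_feat)], dim=-1)

    def forward(self, face_a, voice_a, face_b, voice_b):
        ea = self.embed(face_a, voice_a)   # person A
        eb = self.embed(face_b, voice_b)   # person B
        # Score the pair from the absolute embedding difference (shared weights
        # across both branches, the defining property of a Siamese network).
        return self.head(torch.abs(ea - eb))
```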
Citations: 12
PPG2Live: Using dual PPG for active authentication and liveness detection
Pub Date: 2019-06-01 DOI: 10.1109/ICB45273.2019.8987330
Jan Spooren, D. Preuveneers, W. Joosen
This paper presents a novel solution based on photoplethysmography (PPG) to strengthen face authentication. Our method leverages and combines different PPG signals from multiple channels to meet two objectives. First, it complements face authentication with an additional authentication factor, and second, it strengthens liveness detection, making it more resistant to presentation attacks. Our solution can be implemented as an unlock screen for mobile phones with front- and back-facing cameras or a paired smartwatch, as well as for webcam-equipped laptops augmented with a PPG sensor. Our evaluation shows that our method can significantly improve the resilience of face recognition-based user authentication against presentation attacks.
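A minimal sketch of the dual-PPG consistency idea follows: extract a pulse signal from each of two synchronized streams and require them to agree. The mean-green-channel extractor is the simplest remote-PPG estimator and the correlation threshold is hypothetical; the paper's actual signal processing is not reproduced here.

```python
# NumPy sketch of dual-PPG liveness checking under the assumptions above.
import numpy as np

def ppg_from_frames(frames: np.ndarray) -> np.ndarray:
    """frames: (T, H, W, 3) ROI video -> 1-D pulse signal of length T.
    Uses the mean green channel per frame, the simplest rPPG estimator."""
    sig = frames[..., 1].mean(axis=(1, 2))
    return (sig - sig.mean()) / (sig.std() + 1e-9)

def liveness_score(frames_cam1: np.ndarray, frames_cam2: np.ndarray) -> float:
    """Pearson correlation of the two PPG signals; a live subject should
    produce strongly correlated pulses on both channels."""
    s1, s2 = ppg_from_frames(frames_cam1), ppg_from_frames(frames_cam2)
    return float(np.corrcoef(s1, s2)[0, 1])

# Decision sketch: accept as live if liveness_score(...) exceeds a threshold
# (e.g. 0.5, a purely hypothetical operating point).
```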
Citations: 10
ScleraSegNet: an Improved U-Net Model with Attention for Accurate Sclera Segmentation
Pub Date: 2019-06-01 DOI: 10.1109/ICB45273.2019.8987270
Caiyong Wang, Yong He, Yunfan Liu, Zhaofeng He, R. He, Zhenan Sun
Accurate sclera segmentation is critical for successful sclera recognition. However, studies on sclera segmentation algorithms are still limited in the literature. In this paper, we propose a novel sclera segmentation method based on an improved U-Net model, named ScleraSegNet. We perform an in-depth analysis of the structure of the U-Net model, and propose to embed an attention module into the central bottleneck between the contracting path and the expansive path of U-Net to strengthen its ability to learn discriminative representations. We compare different attention modules and find that channel-wise attention is the most effective in improving the performance of the segmentation network. In addition, we evaluate the effectiveness of data augmentation in improving the generalization ability of the segmentation network. Experimental results show that the best-performing configuration of the proposed method achieves state-of-the-art performance, with F-measure values of 91.43% and 89.54% on UBIRIS.v2 and MICHE, respectively.
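The channel-wise attention the authors find most effective can be sketched as a squeeze-and-excitation style block placed at the U-Net bottleneck, as below; the reduction ratio of 16 is a common default assumed here, not necessarily the paper's setting.

```python
# Sketch of a channel-wise attention block for a U-Net bottleneck.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # squeeze: global context
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                            # x: (B, C, H, W)
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                 # excite: reweight channels

# Placed between the encoder's last stage and the decoder's first, the block
# lets the bottleneck emphasize channels that separate sclera from skin,
# eyelids and eyelashes.
```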
Citations: 17
Authenticating Phone Users Using a Gait-Based Histogram Approach on Mobile App Sessions
Pub Date: 2019-06-01 DOI: 10.1109/ICB45273.2019.8987418
T. Neal, M. A. Noor, P. Gera, Khadija Zanna, G. Kaptan
Collectively, user-friendly interfaces, small but impactful sensing technologies, intuitive device designs, and the variety of mobile applications (or apps) have transformed expectations for cellular phones. Apps are a primary factor in device functionality; they allow users to quickly carry out tasks directly on their device. This paper leverages mobile apps for continuous authentication of mobile device users. We borrow from a gait-based approach by continuously extracting n-bin histograms from numerically encoded app data. Since more active subjects generate more data, it would be trivial to distinguish between these subjects and others that are less active. Thus, we divided a dataset of 19 months of app data from 181 subjects into three datasets to determine whether minimally active, moderately active, or very active subjects were more challenging to authenticate. Using the absolute distance between two histograms, our approach yielded a worst-case EER of 0.188 and a best-case EER of 0.036, with a worst-case initial training period of 1.06 hours. We also show a positive correlation between user activity level and performance, and between template size and performance. Our method is characterized by minimal training samples and a context-independent evaluation, addressing important factors known to affect the practicality of continuous authentication systems.
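A minimal sketch of the histogram template and the absolute-distance matcher is given below; the bin count and the thresholding policy are illustrative assumptions.

```python
# NumPy sketch of histogram-based continuous authentication on app events.
import numpy as np

def session_histogram(app_ids: np.ndarray, n_bins: int = 20) -> np.ndarray:
    """Build a normalized n-bin histogram over numerically encoded app events."""
    hist, _ = np.histogram(app_ids, bins=n_bins, range=(0, n_bins))
    return hist / max(hist.sum(), 1)

def histogram_distance(h1: np.ndarray, h2: np.ndarray) -> float:
    """Absolute (L1) distance between two session histograms; smaller means
    the sessions more plausibly come from the same user."""
    return float(np.abs(h1 - h2).sum())

# Enrolment stores a template histogram per user; at test time a session is
# accepted when histogram_distance(template, probe) falls below a threshold
# tuned on validation data, e.g. at the EER operating point.
```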
Citations: 0
Fingerprint Quality: Mapping NFIQ1 Classes and NFIQ2 Values
Pub Date: 2019-06-01 DOI: 10.1109/ICB45273.2019.8987244
Javier Galbally, Rudolf Haraksim, P. Ferrara, Laurent Beslay, Elham Tabassi
Over the last two decades of biometric research, it has been shown on numerous occasions that the quality of biometric samples has a key impact on the performance of biometric recognition systems. Few other biometric characteristics, if any, have been analysed as deeply from a quality perspective as fingerprints. This is largely due to the development by the US NIST of two successive system-independent metrics that have become the standard for estimating fingerprint quality: NFIQ1 and NFIQ2. However, in spite of their unquestionable influence on the development of fingerprint technology, there is still a lack of understanding of how these two metrics relate to each other. The present article is an attempt to bridge this gap, presenting new insight into the meaningfulness of both metrics and describing a mapping function between NFIQ2 values and NFIQ1 classes.
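For orientation, a mapping function between the two metrics has the shape sketched below: NFIQ2 produces scores in [0, 100] with higher meaning better quality, while NFIQ1 assigns classes 1 (best) to 5 (worst). The cut-points are purely hypothetical placeholders, not the empirical mapping derived in the paper.

```python
# Toy sketch of an NFIQ2 -> NFIQ1 mapping function. The thresholds below
# are hypothetical; only the score ranges and class orderings are factual.
def nfiq2_to_nfiq1(score: int) -> int:
    cuts = [(80, 1), (60, 2), (40, 3), (20, 4)]  # hypothetical cut-points
    for threshold, nfiq1_class in cuts:
        if score >= threshold:
            return nfiq1_class
    return 5  # lowest-quality class
```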
Citations: 4
Thermal and Cross-spectral Palm Image Matching in the Visual Domain by Robust Image Transformation
Pub Date: 2019-06-01 DOI: 10.1109/ICB45273.2019.8987435
Ewelina Bartuzi, N. Damer
Synthesizing visual-like images from those captured in the thermal spectrum allows for direct cross-domain comparisons. Moreover, it enables thermal-to-thermal comparisons that take advantage of feature extraction methodologies developed for the visual domain. Hand-based biometrics are socially accepted and can operate in a touchless mode. However, certain deployment scenarios require capture in non-visual spectra due to impractical illumination requirements. Generating visual-like palm images from thermal ones faces challenges related to the nature of hand biometrics, such as the dynamic nature of the hand and the difficulty of accurately aligning the hand's scale and rotation, especially in the understudied thermal domain. Building such a synthetic solution is further challenged by the lack of large-scale databases containing images collected in both spectra, as well as by the need to generate images of appropriate resolution. Driven by these challenges, this paper presents a novel solution for transforming thermal palm images into high-quality visual-like images, despite limited training data and variations in scale and rotation. We show that the generated images are similar in quality and highly correlated to the original visual images. We used the synthesized images within verification approaches based on CNN and hand-crafted features. This significantly improved cross-spectral and thermal-to-thermal verification performance, reducing the EER from 37.12% to 16.25% and from 3.04% to 1.65%, respectively, when using CNN-based features.
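As a rough illustration of the transformation step, the sketch below uses a small convolutional encoder-decoder to map a one-channel thermal palm image to a visual-like output; the architecture, channel counts and training loss are assumptions standing in for the paper's network, which is not reproduced here.

```python
# Hedged PyTorch sketch of a thermal-to-visual palm image translator.
import torch
import torch.nn as nn

class Thermal2VisualNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),   # thermal in
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Tanh(),  # visual-like out
        )

    def forward(self, thermal):      # thermal: (B, 1, H, W), values in [-1, 1]
        return self.decoder(self.encoder(thermal))

# Trained with a pixel reconstruction loss against paired visual images, the
# output can then be fed to a visual-domain CNN feature extractor for
# cross-spectral matching, as the abstract describes.
```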
Citations: 1
SANet: Smoothed Attention Network for Single Stage Face Detector
Pub Date: 2019-06-01 DOI: 10.1109/ICB45273.2019.8987285
Lei Shi, Xiang Xu, I. Kakadiaris
Recently, significant effort has been devoted to exploring the role of feature fusion and enriched contextual information in detecting multi-scale faces. However, simply integrating features of different levels can introduce significant noise. Moreover, recently proposed approaches to enriching contextual information are either inefficient or ignore the gridding artifacts produced by dilated convolution. To tackle these issues, we developed a smoothed attention network (dubbed SANet), which introduces an Attention-guided Feature Fusion Module (AFFM) and a Smoothed Context Enhancement Module (SCEM). In particular, the AFFM applies an attention module to high-level semantic features and fuses the attention-focused features with low-level semantic features to reduce the noise of the fused feature map. The SCEM alternately stacks dilated convolution and standard convolution layers to re-learn the relationships among the completely separate sets of units produced by dilated convolution, maintaining the consistency of local information. The SANet achieves promising results on the WIDER FACE validation and testing datasets and is state-of-the-art on the UFDD dataset.
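The attention-guided fusion step can be sketched as below: high-level features are reweighted by channel attention before being upsampled and merged with low-level features, so that fusion adds semantics rather than noise. The channel handling and the additive merge are illustrative assumptions, not the exact AFFM.

```python
# Hedged PyTorch sketch of attention-guided feature fusion in the spirit of
# the AFFM; channel counts and the fusion rule are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionFusion(nn.Module):
    def __init__(self, high_ch: int, low_ch: int):
        super().__init__()
        self.att = nn.Sequential(                    # channel attention weights
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(high_ch, high_ch, 1), nn.Sigmoid(),
        )
        self.reduce = nn.Conv2d(high_ch, low_ch, 1)  # match channel widths

    def forward(self, high, low):
        high = high * self.att(high)                 # focus high-level semantics
        high = F.interpolate(self.reduce(high), size=low.shape[-2:],
                             mode="bilinear", align_corners=False)
        return low + high                            # fused feature map
```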
Citations: 6
Multi-Modal Fingerprint Presentation Attack Detection: Analysing the Surface and the Inside
Pub Date: 2019-06-01 DOI: 10.1109/ICB45273.2019.8987260
M. Gomez-Barrero, Jascha Kolberg, C. Busch
The deployment of biometric recognition systems has increased considerably over the last decade, in particular for fingerprint-based systems. To tackle the security issues arising from presentation attacks launched on the biometric capture device, automatic presentation attack detection (PAD) methods have been proposed. In spite of their high detection rates on the LivDet databases, the vast majority of these methods rely on the samples provided by traditional capture devices, and may fail to detect more sophisticated presentation attack instrument (PAI) species. In this paper, we propose a multi-modal fingerprint PAD which relies on an analysis of: i) the surface of the finger within the short-wave infrared (SWIR) spectrum, and ii) the inside of the finger, using laser speckle contrast imaging (LSCI) technology. In an experimental evaluation on a database comprising more than 4700 samples and 35 PAI species, including unknown attacks to model a realistic scenario, a Detection Equal Error Rate (D-EER) of 0.5% is achieved. Moreover, for a BPCER ≤ 0.1% (i.e., a highly convenient system), the APCER remains around 3%.
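The error rates quoted above follow the ISO/IEC 30107-3 definitions, which can be computed as in this short sketch: APCER is the fraction of attack presentations wrongly accepted and BPCER the fraction of bona fide presentations wrongly rejected at a given threshold. The score convention (higher = more bona fide) is an assumption.

```python
# NumPy sketch of APCER / BPCER at a fixed decision threshold.
import numpy as np

def apcer_bpcer(scores: np.ndarray, is_attack: np.ndarray, thr: float):
    """scores: higher = more bona fide; is_attack: boolean mask per sample."""
    attack, bona = scores[is_attack], scores[~is_attack]
    apcer = float((attack >= thr).mean())   # attacks accepted as bona fide
    bpcer = float((bona < thr).mean())      # bona fides rejected
    return apcer, bpcer

# The D-EER is the error rate at the threshold where APCER == BPCER; the
# paper additionally reports the APCER at the threshold giving
# BPCER <= 0.1%, i.e. a highly convenient operating point.
```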
Citations: 13