
Latest publications: 2017 IEEE International Joint Conference on Biometrics (IJCB)

Linking face images captured from the optical phenomenon in the wild for forensic science
Pub Date: 2017-10-01 DOI: 10.1109/BTAS.2017.8272770
Abhijit Das, Abira Sengupta, M. A. Ferrer-Ballester, U. Pal, M. Blumenstein
This paper discusses the possibility of using challenging face images captured from optical phenomena in the wild for forensic individual identification. Occluded or covered faces in a surveillance scenario can be collected from their reflections on surrounding glass or on a smooth wall within the camera's coverage, and such face images can be linked for forensic purposes. A similar scenario that can also be used forensically is the face of an individual standing behind a transparent glass wall. This study was conducted to investigate the capability of these images for personal identification. It examines different types of features employed in the literature to establish individual identification from such degraded face images; among them, local region based features worked best. To achieve higher accuracy and better facial features, face images were cropped manually along a close bounding box and noise removal (reflections, etc.) was performed. For the experiments we developed a database covering the above scenarios, which will be made publicly available for academic research. This initial investigation substantiates the possibility of using such face images for forensic purposes.
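The abstract credits local region based features with the best results but does not name the descriptor. As a minimal sketch of that family of methods, the following assumes uniform LBP histograms computed over a grid of patches of the manually cropped face, compared with a chi-square distance; all parameter choices here are illustrative.

```python
# Minimal sketch of local-region feature matching for degraded face crops.
# The descriptor (uniform LBP) and the patch grid are assumptions made for
# illustration; the paper only states that local region based features worked best.
import numpy as np
from skimage.feature import local_binary_pattern

def patch_lbp_histograms(face_gray, grid=(4, 4), points=8, radius=1):
    """Split a cropped grayscale face into a grid of patches and return
    the concatenated uniform-LBP histogram of each patch."""
    lbp = local_binary_pattern(face_gray, points, radius, method="uniform")
    n_bins = points + 2  # uniform LBP yields values in [0, points + 1]
    h, w = lbp.shape
    ph, pw = h // grid[0], w // grid[1]
    feats = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            patch = lbp[i * ph:(i + 1) * ph, j * pw:(j + 1) * pw]
            hist, _ = np.histogram(patch, bins=n_bins, range=(0, n_bins), density=True)
            feats.append(hist)
    return np.concatenate(feats)

def chi_square_distance(f1, f2, eps=1e-10):
    """Smaller distance means the two face crops are more likely the same person."""
    return 0.5 * np.sum((f1 - f2) ** 2 / (f1 + f2 + eps))
```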
Citations: 1
Fingerprint indexing based on pyramid deep convolutional feature
Pub Date: 2017-10-01 DOI: 10.1109/BTAS.2017.8272699
Dehua Song, Jufu Feng
Fingerprint ridges contain a wealth of discriminative information for fingerprint indexing; however, rule-based methods struggle to depict ridge structure because of nonlinear distortion. This paper investigates representing ridge structure with a Deep Convolutional Neural Network (DCNN). The indexing approach partitions the fingerprint image into increasingly fine sub-regions and extracts a feature from each sub-region with the DCNN, forming a pyramid deep convolutional feature that represents both global patterns and local details (especially minutiae). Extensive experimental results show that the proposed method achieves better accuracy and efficiency than other prominent indexing approaches. Finally, occlusion sensitivity, visualization, and fingerprint reconstruction techniques are employed to explore which ridge attributes the deep convolutional feature describes.
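A minimal sketch of assembling such a pyramid descriptor is given below; the embedding network `embed_net`, the grid levels, and the input size are assumptions, since the paper's DCNN architecture is not reproduced in the abstract.

```python
# Minimal sketch of a pyramid deep convolutional feature, assuming a generic
# embedding CNN `embed_net`; levels and input size are illustrative.
import torch
import torch.nn.functional as F

def pyramid_regions(img, levels=(1, 2, 4)):
    """Yield sub-regions of a (C, H, W) fingerprint tensor on an
    increasingly fine grid: 1x1, then 2x2, then 4x4."""
    _, h, w = img.shape
    for g in levels:
        ph, pw = h // g, w // g
        for i in range(g):
            for j in range(g):
                yield img[:, i * ph:(i + 1) * ph, j * pw:(j + 1) * pw]

def pyramid_feature(img, embed_net, size=128):
    """Resize each sub-region, embed it with the CNN, and concatenate all
    embeddings into one pyramid descriptor used for indexing."""
    feats = []
    with torch.no_grad():
        for region in pyramid_regions(img):
            x = F.interpolate(region.unsqueeze(0), size=(size, size),
                              mode="bilinear", align_corners=False)
            feats.append(embed_net(x).flatten())
    return torch.cat(feats)
```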
Citations: 15
LivDet iris 2017 — Iris liveness detection competition 2017
Pub Date: 2017-10-01 DOI: 10.1109/BTAS.2017.8272763
David Yambay, Benedict Becker, Naman Kohli, Daksha Yadav, A. Czajka, K. Bowyer, S. Schuckers, Richa Singh, Mayank Vatsa, A. Noore, Diego Gragnaniello, Carlo Sansone, L. Verdoliva, Lingxiao He, Yiwei Ru, Haiqing Li, Nianfeng Liu, Zhenan Sun, T. Tan
Presentation attacks, such as a contact lens with a printed pattern or a printout of an iris, can be used to bypass a biometric security system. The first international iris liveness competition was launched in 2013 to assess the performance of presentation attack detection (PAD) algorithms, with a second competition in 2015. This paper presents the results of the third competition, LivDet-Iris 2017. Three software-based approaches to presentation attack detection were submitted. Four datasets of live and spoof images were tested, along with an additional cross-sensor test. New datasets and novel data conditions made this competition more difficult than its predecessors. Anonymous received the best results, with a rate of rejected live samples of 3.36% and a rate of accepted spoof samples of 14.71%. The results show that, despite recent advances, printed iris attacks as well as patterned contact lenses remain difficult for software-based systems to detect. As in previous competitions, printed iris images were easier to differentiate from live images than patterned contact lenses.
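The two reported error rates correspond to what ISO/IEC 30107-3 terminology calls the BPCER (bona fide presentations rejected) and the APCER (attack presentations accepted). A minimal sketch of computing them from per-sample decisions, with illustrative label conventions:

```python
# Minimal sketch of the two competition error rates: the rate of rejected
# live samples (BPCER in ISO/IEC 30107-3 terms) and the rate of accepted
# spoof samples (APCER). The label/decision encoding is an assumption.
import numpy as np

def pad_error_rates(labels, decisions):
    """labels: 1 = live (bona fide), 0 = spoof (attack).
    decisions: 1 = accepted as live, 0 = rejected."""
    labels, decisions = np.asarray(labels), np.asarray(decisions)
    live, spoof = labels == 1, labels == 0
    rejected_live = np.mean(decisions[live] == 0)    # BPCER
    accepted_spoof = np.mean(decisions[spoof] == 1)  # APCER
    return rejected_live, accepted_spoof
```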
Citations: 80
Face morphing versus face averaging: Vulnerability and detection
Pub Date: 2017-10-01 DOI: 10.1109/BTAS.2017.8272742
Ramachandra Raghavendra, K. Raja, S. Venkatesh, C. Busch
Face recognition systems (FRS) are known to be vulnerable to attacks using morphed faces. As the use of facial characteristics is mandatory in the electronic passport (ePass), morphing attacks have raised potential concerns in border security. In this paper, we analyze the vulnerability of the FRS to a new attack performed using averaged faces. An averaged face is generated by simple pixel-level averaging of two face images corresponding to two different subjects. We benchmark the vulnerability of a commercial FRS to both conventional morphing and averaging-based face attacks. We further propose a novel algorithm based on the collaborative representation of micro-texture features extracted from the colour space to reliably detect both morphed and averaged face attacks on the FRS. Extensive experiments are carried out on a newly constructed morphed and averaged face image database with 163 subjects. The database is built around the real-life passport issuance scenario, which typically accepts a printed passport photo from the applicant that is then scanned and stored in the ePass. Thus, the new database contains print-scanned bona fide, morphed, and averaged face samples. The results demonstrate the improved performance of the proposed scheme on this print-scanned morphed and averaged face database.
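A minimal sketch of the averaging operation described above, assuming the two face images have already been aligned (the alignment step is not specified here):

```python
# Minimal sketch of the averaged-face attack: a simple pixel-level 50/50
# blend of two subjects' face images. Pre-alignment of the faces is assumed;
# file names and output size are illustrative.
import cv2
import numpy as np

def averaged_face(path_a, path_b, size=(250, 250)):
    """Load two (pre-aligned) face images and blend them 50/50 per pixel."""
    a = cv2.resize(cv2.imread(path_a), size).astype(np.float32)
    b = cv2.resize(cv2.imread(path_b), size).astype(np.float32)
    avg = 0.5 * a + 0.5 * b
    return np.clip(avg, 0, 255).astype(np.uint8)

# Usage: cv2.imwrite("averaged.png", averaged_face("subject1.png", "subject2.png"))
```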
Citations: 79
Continuous heart rate measurement from face: A robust rPPG approach with distribution learning
Pub Date: 2017-10-01 DOI: 10.1109/BTAS.2017.8272752
Xuesong Niu, Hu Han, S. Shan, Xilin Chen
Non-contact heart rate (HR) measurement via remote photoplethysmography (rPPG) has drawn increasing attention. While a number of methods have been reported, most of them do not address the continuous HR measurement problem, which is more challenging due to the limited number of observed video frames and the requirement of speed. In this paper, we present a real-time rPPG method for continuous HR measurement from face videos. We use a multi-patch ROI strategy to remove outlier signals. A chrominance feature is then generated from each ROI to reduce the color-channel magnitude differences, followed by temporal filtering to suppress artifacts. In addition, considering the temporal relationship of neighboring HR rhythms, we learn an HR distribution based on historical HR measurements and apply it to the succeeding HR estimations. Experimental results on the public-domain MAHNOB-HCI database and user tests with commodity webcams show the effectiveness of the proposed approach.
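The paper's exact chrominance definition is not reproduced in the abstract; the sketch below assumes the widely used CHROM formulation for the per-ROI chrominance signal, followed by temporal band-pass filtering over a plausible heart-rate band.

```python
# Minimal sketch of a chrominance-based pulse signal with temporal band-pass
# filtering. The chrominance feature follows the common CHROM formulation,
# assumed here; filter order and band are illustrative.
import numpy as np
from scipy.signal import butter, filtfilt

def chrom_pulse(rgb_means, fps=30.0, band=(0.7, 4.0)):
    """rgb_means: (T, 3) array of per-frame mean R, G, B over one face ROI.
    Returns a band-pass filtered pulse signal; its dominant frequency
    gives the heart rate (0.7-4.0 Hz covers roughly 42-240 bpm)."""
    norm = rgb_means / rgb_means.mean(axis=0)              # temporal normalization
    x = 3.0 * norm[:, 0] - 2.0 * norm[:, 1]                # chrominance signal X
    y = 1.5 * norm[:, 0] + norm[:, 1] - 1.5 * norm[:, 2]   # chrominance signal Y
    b, a = butter(3, [band[0] / (fps / 2), band[1] / (fps / 2)], btype="band")
    xf, yf = filtfilt(b, a, x), filtfilt(b, a, y)
    alpha = np.std(xf) / np.std(yf)
    return xf - alpha * yf
```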
Citations: 33
On the feasibility of creating morphed iris-codes
Pub Date: 2017-10-01 DOI: 10.1109/BTAS.2017.8272693
C. Rathgeb, C. Busch
Morphing techniques can be used to create artificial biometric samples that resemble the biometric information of two (or more) individuals in both the image and feature domains. If morphed biometric images or templates are infiltrated into a biometric recognition system, the subjects contributing to the morphed image will both (or all) be successfully verified against a single enrolled template. Hence, the unique link between individuals and their biometric reference data is annulled. The vulnerability of face and fingerprint recognition systems to such morphing attacks has been assessed in the recent past. In this paper, we investigate the feasibility of morphing iris-codes. Two relevant attack scenarios are discussed, and a scheme for morphing pairs of iris-codes based on the expected stability of their bits is proposed. Different iris recognition systems that accept comparison scores at a recommended Hamming distance of 0.32 are shown to be vulnerable to attacks based on the presented morphing technique.
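A minimal sketch of the idea: bits of the morphed code are drawn from either contributor, and the morph is scored against each with the fractional Hamming distance. The bit-selection mask below is illustrative; the proposed scheme selects bits by their expected stability.

```python
# Minimal sketch of morphing two binary iris-codes and scoring the morph
# against each contributor. The alternating bit-selection mask is an
# illustrative assumption; the paper selects bits by expected stability.
import numpy as np

def morph_iris_codes(code_a, code_b, take_from_a):
    """code_a, code_b: boolean arrays of equal length (the iris-codes).
    take_from_a: boolean mask choosing which bits come from code_a."""
    return np.where(take_from_a, code_a, code_b)

def hamming_distance(code_x, code_y):
    """Fractional Hamming distance; scores below ~0.32 are typically
    accepted as a match by iris recognition systems."""
    return np.mean(code_x != code_y)

# Usage: take half the bits from each of two unrelated codes. The morph then
# sits at roughly 0.25 from each contributor, i.e. below the 0.32 threshold.
rng = np.random.default_rng(0)
a, b = rng.random(2048) < 0.5, rng.random(2048) < 0.5
mask = np.arange(2048) % 2 == 0
m = morph_iris_codes(a, b, mask)
print(hamming_distance(m, a), hamming_distance(m, b))
```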
Citations: 21
On fine-tuning convolutional neural networks for smartphone based ocular recognition
Pub Date: 2017-10-01 DOI: 10.1109/BTAS.2017.8272767
A. Rattani, R. Derakhshani
Recently reported advances in smartphone-based ocular biometric recognition in the visible spectrum have demonstrated the efficacy of deep-learning schemes. In this paper, we evaluate convolutional neural networks (CNNs) pretrained for large-scale object recognition, namely VGG-16, VGG-19, InceptionNet, and ResNet, fine-tuned for ocular recognition using RGB images captured by smartphones. Fine-tuning pretrained CNN models is advantageous when training data are insufficient, and the partial training is faster than training a custom CNN from scratch. Experiments on the VISOB dataset yielded a TPR of up to 100% at an FPR of 10^-4 using the VGG-16 model fine-tuned for ocular recognition.
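A minimal sketch of the fine-tuning setup for one of the evaluated networks, VGG-16, assuming a closed-set classification formulation; the number of subjects, frozen layers, and optimizer settings are illustrative, not the paper's protocol.

```python
# Minimal sketch of fine-tuning an ImageNet-pretrained VGG-16 for ocular
# recognition. Freezing the convolutional base and retraining the classifier
# head is one common "partial training" setup, assumed here.
import torch
import torch.nn as nn
from torchvision import models

num_subjects = 550  # illustrative; depends on the enrollment set

model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
for p in model.features.parameters():
    p.requires_grad = False                           # freeze the conv base
model.classifier[6] = nn.Linear(4096, num_subjects)   # replace the final layer

optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad),
    lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One fine-tuning step on a batch of smartphone ocular crops."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```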
Citations: 33
Fingerprint presentation attacks detection based on the user-specific effect
Pub Date: 2017-10-01 DOI: 10.1109/BTAS.2017.8272717
Luca Ghiani, G. Marcialis, F. Roli
Similarities among different acquisitions of the same fingerprint have so far never been taken into account in the feature spaces designed to detect fingerprint presentation attacks. The existence of such resemblances was only shown in a recent work in which the authors described what they called the "user-specific effect". In this paper, we present a first attempt to take advantage of this effect to improve the performance of a fingerprint presentation attack detection (FPAD) system. In particular, we conceived a three-bit binary code aimed at "detecting" this effect. Coupled with a classifier trained according to the standard protocol followed, for example, in the LivDet competition, this approach yielded better accuracy than the "generic users" classifier alone.
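The abstract does not define the three bits. Purely as an illustrative sketch, the code below assumes each bit flags whether the probe resembles the user's enrolled acquisitions under one similarity cue, and appends the code to the generic texture features before classification; every name and threshold here is hypothetical.

```python
# Illustrative sketch of coupling a user-specific binary code with a generic
# liveness classifier. The meaning of the three bits is an assumption made
# for illustration; the abstract only states that a three-bit code is used.
import numpy as np
from sklearn.svm import SVC

def similarity(f1, f2):
    """Cosine similarity between two texture feature vectors."""
    return float(np.dot(f1, f2) / (np.linalg.norm(f1) * np.linalg.norm(f2) + 1e-10))

def user_specific_code(probe_feat, enrolled_feats, thresholds=(0.8, 0.8, 0.8)):
    """Three hypothetical bits: does the probe resemble the user's first
    three enrolled acquisitions above a per-reference threshold?"""
    sims = [similarity(probe_feat, ref) for ref in enrolled_feats[:3]]
    return np.array([s > t for s, t in zip(sims, thresholds)], dtype=float)

def augmented_features(texture_feats, codes):
    """Append the three-bit codes to the generic texture features."""
    return np.hstack([texture_feats, codes])

clf = SVC(kernel="rbf", probability=True)  # the "generic users" classifier
# clf.fit(augmented_features(train_feats, train_codes), train_labels)
```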
Citations: 3
Deep features-based expression-invariant tied factor analysis for emotion recognition
Pub Date: 2017-10-01 DOI: 10.1109/BTAS.2017.8272741
Sarasi Munasinghe, C. Fookes, S. Sridharan
Video-based facial expression recognition is an open research challenge not solved by the current state of the art. On the other hand, static-image-based emotion recognition is highly important when videos are not available and human emotions must be determined from a single shot. This paper proposes sequence-based and image-based tied factor analysis frameworks with a deep network that simultaneously address these two problems. For video-based data, we first extract deep convolutional temporal appearance features from image sequences; these features are then fed into a generative model that constructs a low-dimensional observed space for all individuals, conditioned on the facial expression sequences. After learning the sequential expression components of the transition matrices among the expression manifolds, we use a Gaussian probabilistic approach to design an efficient classifier for temporal facial expression recognition. Furthermore, we analyse the utility of the proposed video-based methods for image-based emotion recognition by learning static tied factor analysis parameters. The model can also be used to predict expressive face image sequences from given neutral faces. Recognition results on three public benchmark databases: CK+, JAFFE, and FER2013, clearly indicate that our approach achieves effective performance over current techniques for handling sequential and static facial expression variations.
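The tied factor analysis machinery can be summarized by its standard generative model (after Prince et al.), given below as a hedged reconstruction since the abstract does not spell out the parameterization: an identity factor tied across expressions, with expression-specific loadings.

```latex
% Standard tied factor analysis model, assumed here as the underlying form:
% x_{ij} is the feature vector of identity i under expression j, h_i is the
% expression-invariant identity factor, and F_j, m_j are expression-specific.
x_{ij} = F_j h_i + m_j + \epsilon_{ij}, \qquad \epsilon_{ij} \sim \mathcal{N}(0, \Sigma_j)
```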
Citations: 6
Synthetic iris presentation attack using iDCGAN
Pub Date: 2017-10-01 DOI: 10.1109/BTAS.2017.8272756
Naman Kohli, Daksha Yadav, Mayank Vatsa, Richa Singh, A. Noore
The reliability and accuracy of the iris biometric modality have prompted its large-scale deployment for critical applications such as border control and national ID projects. The extensive growth of iris recognition systems has raised apprehensions about their susceptibility to various attacks. In the past, researchers have examined the impact of various iris presentation attacks such as textured contact lenses and print attacks. In this research, we present a novel presentation attack using deep-learning-based synthetic iris generation. Utilizing the generative capability of deep convolutional generative adversarial networks and iris quality metrics, we propose a new framework, named iDCGAN (iris deep convolutional generative adversarial network), for generating realistic-appearing synthetic iris images. Using a commercial system, we demonstrate the effect of these synthetically generated iris images as a presentation attack on iris recognition. The state-of-the-art presentation attack detection framework DESIST is used to analyze whether it can discriminate these synthetically generated iris images from real images. The experimental results illustrate that mitigating the proposed synthetic presentation attack is of paramount importance.
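A minimal sketch of a DCGAN-style generator for 64x64 grayscale iris images; the actual iDCGAN architecture and its coupling with iris quality metrics are not reproduced here, so the layer sizes are assumptions.

```python
# Minimal sketch of a DCGAN-style generator for one-channel 64x64 iris
# images. Layer widths, latent size, and output resolution are illustrative;
# the iDCGAN architecture itself is not reproduced here.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, z_dim=100, feat=64):
        super().__init__()
        self.net = nn.Sequential(
            # z: (z_dim, 1, 1) -> (feat*8, 4, 4)
            nn.ConvTranspose2d(z_dim, feat * 8, 4, 1, 0, bias=False),
            nn.BatchNorm2d(feat * 8), nn.ReLU(True),
            nn.ConvTranspose2d(feat * 8, feat * 4, 4, 2, 1, bias=False),  # 8x8
            nn.BatchNorm2d(feat * 4), nn.ReLU(True),
            nn.ConvTranspose2d(feat * 4, feat * 2, 4, 2, 1, bias=False),  # 16x16
            nn.BatchNorm2d(feat * 2), nn.ReLU(True),
            nn.ConvTranspose2d(feat * 2, feat, 4, 2, 1, bias=False),      # 32x32
            nn.BatchNorm2d(feat), nn.ReLU(True),
            nn.ConvTranspose2d(feat, 1, 4, 2, 1, bias=False),             # 64x64
            nn.Tanh(),  # outputs in [-1, 1]
        )

    def forward(self, z):
        return self.net(z)

# Usage: g = Generator(); fake = g(torch.randn(16, 100, 1, 1))
```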
Citations: 35