
2017 IEEE International Joint Conference on Biometrics (IJCB): Latest publications

Face morphing versus face averaging: Vulnerability and detection
Pub Date : 2017-10-01 DOI: 10.1109/BTAS.2017.8272742
Ramachandra Raghavendra, K. Raja, S. Venkatesh, C. Busch
Face Recognition Systems (FRS) are known to be vulnerable to attacks using morphed face images. As the use of face characteristics is mandatory in the electronic passport (ePass), morphing attacks have raised concerns for border security. In this paper, we analyze the vulnerability of the FRS to a new attack performed using averaged faces. The averaged face is generated by simple pixel-level averaging of two face images corresponding to two different subjects. We benchmark the vulnerability of a commercial FRS to both conventional morphing and averaging-based face attacks. We further propose a novel algorithm based on the collaborative representation of micro-texture features extracted from the colour space to reliably detect both morphed- and averaged-face attacks on the FRS. Extensive experiments are carried out on a newly constructed morphed and averaged face image database with 163 subjects. The database reflects the real-life passport-issuance scenario, which typically accepts a printed passport photo from the applicant that is then scanned and stored in the ePass. Thus, the newly constructed database contains print-scanned bona fide, morphed and averaged face samples. The obtained results demonstrate the improved performance of the proposed scheme on the print-scanned morphed and averaged face database.
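The averaging attack is conceptually simple; the following is a minimal sketch (ours, not the authors' pipeline, which additionally prints and re-scans the images) of pixel-level averaging of two aligned grayscale images:

```python
def average_faces(img_a, img_b):
    """Pixel-level average of two aligned, same-size grayscale images.

    Each image is a list of rows of integer intensities (0-255).
    Illustrative sketch of the averaging attack only.
    """
    assert len(img_a) == len(img_b) and len(img_a[0]) == len(img_b[0])
    return [[(a + b) // 2 for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(img_a, img_b)]

# Two toy 2x2 "face images" of different subjects.
subject1 = [[0, 64], [128, 255]]
subject2 = [[255, 64], [0, 1]]
print(average_faces(subject1, subject2))  # [[127, 64], [64, 128]]
```

In the paper's scenario the averaged image would additionally pass through a print-and-scan cycle before being stored in the ePass.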
Citations: 79
Multi-iris indexing and retrieval: Fusion strategies for bloom filter-based search structures
Pub Date : 2017-10-01 DOI: 10.1109/BTAS.2017.8272681
P. Drozdowski, C. Rathgeb, C. Busch
We present a multi-iris indexing system for efficient and accurate large-scale identification. The system is based on Bloom filters and binary search trees. We describe and empirically evaluate several possible information fusion strategies for the system. These experiments are performed using a combination of several publicly available datasets; the proposed system is tested in an open-set identification scenario consisting of 6,000 genuine and 100,000 impostor transactions. The system maintains the near-optimal biometric performance of an iris-code, score-fusion-based baseline system, while reducing the required lookup workload to less than 1% thereof.
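As a rough illustration of the kind of structure involved (not the authors' implementation), a Bloom filter answers set-membership queries over, for example, fixed-width words cut from a binary iris-code; the parameters below are arbitrary:

```python
class BloomFilter:
    """Tiny Bloom filter sketch (hypothetical parameters, not the paper's)."""

    def __init__(self, size=1024, hashes=3):
        self.size, self.hashes, self.bits = size, hashes, 0

    def _positions(self, item):
        # Derive `hashes` bit positions from the built-in hash function.
        return [hash((i, item)) % self.size for i in range(self.hashes)]

    def add(self, item):
        for p in self._positions(item):
            self.bits |= 1 << p

    def __contains__(self, item):
        # May yield false positives, but never false negatives.
        return all(self.bits >> p & 1 for p in self._positions(item))

bf = BloomFilter()
# Insert fixed-width column words cut from a binary iris-code string.
iris_code = "1011001110001111"
words = [iris_code[i:i + 4] for i in range(0, len(iris_code), 4)]
for w in words:
    bf.add(w)
print(all(w in bf for w in words))  # True
```

In the paper this idea is combined with binary search trees and fusion across multiple irises to prune the candidate list before comparison.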
Citations: 6
Synthetic iris presentation attack using iDCGAN
Pub Date : 2017-10-01 DOI: 10.1109/BTAS.2017.8272756
Naman Kohli, Daksha Yadav, Mayank Vatsa, Richa Singh, A. Noore
The reliability and accuracy of the iris biometric modality have prompted its large-scale deployment for critical applications such as border control and national ID projects. The extensive growth of iris recognition systems has raised apprehensions about the susceptibility of these systems to various attacks. In the past, researchers have examined the impact of various iris presentation attacks such as textured contact lenses and print attacks. In this research, we present a novel presentation attack using deep learning based synthetic iris generation. Utilizing the generative capability of deep convolutional generative adversarial networks and iris quality metrics, we propose a new framework, named iDCGAN (iris deep convolutional generative adversarial network), for generating realistic-appearing synthetic iris images. We demonstrate the effect of these synthetically generated iris images as a presentation attack on iris recognition using a commercial system. The state-of-the-art presentation attack detection framework DESIST is utilized to analyze whether it can discriminate these synthetically generated iris images from real images. The experimental results illustrate that mitigating the proposed synthetic presentation attack is of paramount importance.
Citations: 35
Unconstrained visible spectrum iris with textured contact lens variations: Database and benchmarking
Pub Date : 2017-10-01 DOI: 10.1109/BTAS.2017.8272744
Daksha Yadav, Naman Kohli, Mayank Vatsa, Richa Singh, A. Noore
Iris recognition in the visible spectrum has developed into an active area of research. This has elevated the importance of efficient presentation attack detection algorithms, particularly in security-critical applications. In this paper, we present the first detailed analysis of the effect of textured contact lenses on iris recognition in the visible spectrum. We introduce the first contact lens database in the visible spectrum, the Unconstrained Visible Contact Lens Iris (UVCLI) Database, containing samples from 70 classes, with subjects wearing textured contact lenses in indoor and outdoor environments across multiple sessions. We observe that textured contact lenses degrade visible spectrum iris recognition performance by over 25% and thus may be utilized, intentionally or unintentionally, to attack existing iris recognition systems. Next, three iris presentation attack detection (PAD) algorithms are evaluated on the proposed database, and a highest PAD accuracy of 82.85% is observed. This illustrates that there is significant scope for improvement in developing efficient PAD algorithms for detecting textured contact lenses in unconstrained visible spectrum iris images.
Citations: 6
LivDet iris 2017 — Iris liveness detection competition 2017
Pub Date : 2017-10-01 DOI: 10.1109/BTAS.2017.8272763
David Yambay, Benedict Becker, Naman Kohli, Daksha Yadav, A. Czajka, K. Bowyer, S. Schuckers, Richa Singh, Mayank Vatsa, A. Noore, Diego Gragnaniello, Carlo Sansone, L. Verdoliva, Lingxiao He, Yiwei Ru, Haiqing Li, Nianfeng Liu, Zhenan Sun, T. Tan
Presentation attacks, such as using a contact lens with a printed pattern or a printout of an iris, can be used to bypass a biometric security system. The first international iris liveness competition was launched in 2013 in order to assess the performance of presentation attack detection (PAD) algorithms, with a second competition in 2015. This paper presents the results of the third competition, LivDet-Iris 2017. Three software-based approaches to presentation attack detection were submitted. Four datasets of live and spoof images were tested, with an additional cross-sensor test. New datasets and novel data situations made this competition more difficult than its predecessors. Anonymous achieved the best results, with a rate of rejected live samples of 3.36% and a rate of accepted spoof samples of 14.71%. The results show that, even with recent advances, printed iris attacks as well as patterned contact lenses are still difficult for software-based systems to detect. Printed iris images were easier to differentiate from live images than patterned contact lenses, as was also seen in previous competitions.
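The two error rates quoted above are the fraction of live samples rejected and the fraction of spoof samples accepted. A minimal sketch (our naming, not the competition's evaluation code) of computing them from per-sample decisions:

```python
def pad_error_rates(labels, decisions):
    """Compute the two LivDet-style error rates from per-sample results.

    labels:    "live" or "spoof" ground truth per sample
    decisions: "live" or "spoof" as output by the PAD algorithm
    Returns (rejected_live_rate, accepted_spoof_rate) in percent,
    i.e. BPCER- and APCER-like quantities (the naming is ours).
    """
    live = [d for l, d in zip(labels, decisions) if l == "live"]
    spoof = [d for l, d in zip(labels, decisions) if l == "spoof"]
    rejected_live = 100.0 * sum(d == "spoof" for d in live) / len(live)
    accepted_spoof = 100.0 * sum(d == "live" for d in spoof) / len(spoof)
    return rejected_live, accepted_spoof

labels    = ["live", "live", "live", "live", "spoof", "spoof", "spoof", "spoof"]
decisions = ["live", "live", "live", "spoof", "spoof", "spoof", "live", "spoof"]
print(pad_error_rates(labels, decisions))  # (25.0, 25.0)
```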
Citations: 80
Fingerprint indexing based on pyramid deep convolutional feature
Pub Date : 2017-10-01 DOI: 10.1109/BTAS.2017.8272699
Dehua Song, Jufu Feng
Fingerprint ridges contain rich discriminative information for fingerprint indexing; however, rule-based methods struggle to describe ridge structure because of nonlinear distortion. This paper investigates representing the structure of ridges with a Deep Convolutional Neural Network (DCNN). The indexing approach partitions the fingerprint image into increasingly fine sub-regions and extracts a feature from each sub-region with the DCNN, forming a pyramid deep convolutional feature that represents both global patterns and local details (especially minutiae). Extensive experimental results show that the proposed method achieves better accuracy and efficiency than other prominent indexing approaches. Finally, occlusion sensitivity, visualization and fingerprint reconstruction techniques are employed to explore which attributes of the ridges are described by the deep convolutional feature.
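The pyramid partitioning step can be sketched independently of the network: level k splits the image into a grid of sub-regions, each of which would then be fed to the DCNN. A minimal illustration (the 2^k-per-side grid sizes are our assumption; the paper's exact partitioning may differ):

```python
def pyramid_regions(width, height, levels=3):
    """Partition an image into increasingly fine grids: level k is a
    (2**k x 2**k) grid of sub-region boxes (x0, y0, x1, y1).
    Sketch of the pyramid partitioning idea only; the per-region
    features in the paper come from a DCNN."""
    pyramid = []
    for k in range(levels):
        n = 2 ** k
        xs = [round(i * width / n) for i in range(n + 1)]
        ys = [round(j * height / n) for j in range(n + 1)]
        level = [(xs[i], ys[j], xs[i + 1], ys[j + 1])
                 for j in range(n) for i in range(n)]
        pyramid.append(level)
    return pyramid

regions = pyramid_regions(256, 256)
print([len(level) for level in regions])  # [1, 4, 16]
```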
Citations: 15
Formulae for consistent biometric score level fusion
Pub Date : 2017-10-01 DOI: 10.1109/BTAS.2017.8272714
J. Hube
In an operational setting, a capability of key practical importance for a biometric application deployment is the ability to set thresholds that meet error rate targets. Consequently, there is a need to consider how output scores from multi-modal score-level fusion are defined. We show a method to ensure these fused scores are consistent with a known input score definition. We derive fusion formulae for the case of input scores based on false acceptance rates. We provide examples to highlight implementation issues.
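The abstract does not give the formulae themselves; as a hedged illustration of the underlying idea of FAR-consistent scores, the sketch below calibrates each modality's score to an empirical false-acceptance rate and fuses by multiplying the per-modality FARs (an independence assumption of ours, not necessarily the paper's derivation):

```python
def score_to_far(score, impostor_scores):
    """Empirical false-acceptance rate at threshold `score`: the fraction
    of impostor comparison scores at least as high (higher = more similar)."""
    return sum(s >= score for s in impostor_scores) / len(impostor_scores)

def fuse_as_far(scores, impostor_sets):
    """Fuse per-modality scores by multiplying their empirical FARs.
    The fused value can then be thresholded directly against a FAR target."""
    fused = 1.0
    for score, impostors in zip(scores, impostor_sets):
        fused *= score_to_far(score, impostors)
    return fused

# Toy impostor score distributions for two modalities (hypothetical data).
imp_face = [i / 10 for i in range(1, 11)]   # 0.1 ... 1.0
imp_iris = [i / 20 for i in range(1, 11)]   # 0.05 ... 0.5
fused = fuse_as_far([0.85, 0.40], [imp_face, imp_iris])
print(round(fused, 6))  # 0.06 -> an effective fused FAR of 6%
```

The point of such a calibration is that a deployment can pick an operating threshold on the fused value directly in terms of an error-rate target.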
Citations: 0
Fingerprint presentation attacks detection based on the user-specific effect
Pub Date : 2017-10-01 DOI: 10.1109/BTAS.2017.8272717
Luca Ghiani, G. Marcialis, F. Roli
The similarities among different acquisitions of the same fingerprint have so far never been taken into account in the feature spaces designed to detect fingerprint presentation attacks. The existence of such resemblances has only been shown in a recent work, in which the authors described what they called the "user-specific effect". In this paper we present a first attempt to take advantage of this effect in order to improve the performance of an FPAD system. In particular, we conceived a three-bit binary code aimed at "detecting" this effect. Coupled with a classifier trained according to a standard protocol, such as the one followed in the LivDet competition, this approach allowed us to obtain better accuracy than the "generic users" classifier alone.
Citations: 3
Identifying the origin of Iris images based on fusion of local image descriptors and PRNU based techniques
Pub Date : 2017-10-01 DOI: 10.1109/BTAS.2017.8272710
Christof Kauba, L. Debiasi, A. Uhl
Being aware of the origin (source sensor) of iris images offers several advantages. Identifying the specific sensor unit helps ensure the integrity and authenticity of iris images and thus detect insertion attacks on a biometric system. Moreover, knowing the sensor model makes selective processing, such as image enhancement, feasible. In order to determine the origin (i.e. dataset) of near-infrared (NIR) and visible spectrum iris/ocular images, we evaluate the performance of three different approaches: one based on photo response non-uniformity (PRNU), one based on image texture features, and the fusion of both. Our first set of experiments includes 19 different datasets comprising different sensors and image resolutions. The second set includes 6 different camera models with 5 instances each. We evaluate the applicability of the three approaches in these test scenarios from forensic and non-forensic perspectives.
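As background on the PRNU idea (a generic sketch of PRNU source attribution, not the authors' implementation, which would use proper wavelet denoising): the sensor fingerprint is estimated by averaging the noise residuals of several images from the same sensor, and a query image is attributed by correlating its residual against that fingerprint. A toy simulation:

```python
import random

def denoise(img):
    """3x3 mean filter as a stand-in denoiser (real PRNU work typically
    uses wavelet denoising); border pixels use the available neighbours."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[j][i]
                    for j in range(max(0, y - 1), min(h, y + 2))
                    for i in range(max(0, x - 1), min(w, x + 2))]
            out[y][x] = sum(vals) / len(vals)
    return out

def residual(img):
    """Noise residual: the image minus its denoised version."""
    den = denoise(img)
    return [[img[y][x] - den[y][x] for x in range(len(img[0]))]
            for y in range(len(img))]

def correlation(a, b):
    """Normalized cross-correlation between two equal-size 2-D arrays."""
    fa = [v for row in a for v in row]
    fb = [v for row in b for v in row]
    ma, mb = sum(fa) / len(fa), sum(fb) / len(fb)
    num = sum((x - ma) * (y - mb) for x, y in zip(fa, fb))
    da = sum((x - ma) ** 2 for x in fa) ** 0.5
    db = sum((y - mb) ** 2 for y in fb) ** 0.5
    return num / (da * db) if da and db else 0.0

random.seed(0)
# A fixed per-pixel sensor pattern (the "fingerprint" the camera imprints).
pattern = [[random.uniform(-2, 2) for _ in range(8)] for _ in range(8)]

def shoot(scene):
    """Simulate one shot of a flat scene: scene + sensor pattern + shot noise."""
    return [[scene + pattern[y][x] + random.uniform(-1, 1)
             for x in range(8)] for y in range(8)]

# Estimate the fingerprint as the mean residual over several shots.
shots = [residual(shoot(100)) for _ in range(20)]
fingerprint = [[sum(s[y][x] for s in shots) / len(shots)
                for x in range(8)] for y in range(8)]

same = correlation(residual(shoot(50)), fingerprint)
other = correlation(residual([[50 + random.uniform(-2, 2) for _ in range(8)]
                              for _ in range(8)]), fingerprint)
print(same, other)  # the same-sensor residual should correlate much higher
```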
Citations: 8
Deep features-based expression-invariant tied factor analysis for emotion recognition
Pub Date : 2017-10-01 DOI: 10.1109/BTAS.2017.8272741
Sarasi Munasinghe, C. Fookes, S. Sridharan
Video-based facial expression recognition is an open research challenge not solved by the current state of the art. On the other hand, static image based emotion recognition is highly important when videos are not available and human emotions need to be determined from a single shot. This paper proposes sequence-based and image-based tied factor analysis frameworks with a deep network that simultaneously address these two problems. For video-based data, we first extract deep convolutional temporal appearance features from image sequences; these features are then fed into a generative model that constructs a low-dimensional observed space for all individuals, depending on the facial expression sequences. After learning the sequential expression components of the transition matrices among the expression manifolds, we use a Gaussian probabilistic approach to design an efficient classifier for temporal facial expression recognition. Furthermore, we analyse the utility of the proposed video-based methods for image-based emotion recognition by learning static tied factor analysis parameters. Meanwhile, the model can be used to predict expressive face image sequences from given neutral faces. Recognition results achieved on three public benchmark databases, CK+, JAFFE, and FER2013, clearly indicate that our approach achieves effective performance over current techniques for handling sequential and static facial expression variations.
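As background, the standard tied factor analysis model (in the sense of Prince et al.'s tied factor models; the notation below is ours and may differ from the paper's) generates each observation from an identity factor that is shared, or "tied", across conditions such as expression:

```latex
% x_{ij}: observation of identity i under expression condition j
% h_i:    latent identity factor, tied across all conditions j
x_{ij} = W_j \, h_i + \mu_j + \varepsilon_{ij},
\qquad h_i \sim \mathcal{N}(0, I),
\qquad \varepsilon_{ij} \sim \mathcal{N}(0, \Sigma_j)
```

Because h_i is common to all expressions of the same person, recognition can compare identities across expression changes through the condition-specific loadings W_j.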
基于视频的面部表情识别是一个开放的研究挑战,目前最先进的技术尚未解决。另一方面,静态图像的情感识别是非常重要的,当视频不可用,人类的情绪只需要从一个镜头确定。本文提出了基于序列和基于图像的结合因子分析框架与深度网络,同时解决了这两个问题。对于基于视频的数据,我们首先从图像序列中提取深度卷积时间外观特征,然后将这些特征输入到生成模型中,该模型根据面部表情序列为所有个体构建低维观察空间。在学习了表情流形之间转移矩阵的序列表达分量之后,我们使用高斯概率方法设计了一个高效的人脸表情识别分类器。此外,我们分析了所提出的基于视频的方法在基于图像的情感识别学习静态捆绑因子分析参数方面的效用。同时,该模型可用于预测给定中性人脸的表情图像序列。在三个公共基准数据库(CK+、JAFFE和FER2013)上取得的识别结果清楚地表明,我们的方法比当前处理顺序和静态面部表情变化的技术取得了更有效的性能。
Cited by: 6
Journal: 2017 IEEE International Joint Conference on Biometrics (IJCB)