Linking face images captured from the optical phenomenon in the wild for forensic science
Pub Date: 2017-10-01 | DOI: 10.1109/BTAS.2017.8272770
Abhijit Das, Abira Sengupta, M. A. Ferrer-Ballester, U. Pal, M. Blumenstein
This paper discusses the possibility of using challenging face images captured from optical phenomena in the wild for forensic individual identification. Occluded or covered faces in a surveillance scenario can be captured via their reflection on surrounding glass or on a smooth wall that lies within the surveillance camera's coverage, and such face images can be linked for forensic purposes. A similar scenario that can also be used forensically is the face image of an individual standing behind a transparent glass wall. This study was conducted to investigate the capability of these images for personal identification. Different types of features employed in the literature were investigated to establish individual identification from such degraded face images; among them, local region based features worked best. To achieve higher accuracy and better facial features, face images were cropped manually along a close bounding box and noise removal (reflections, etc.) was performed. For the experiments we developed a database covering the above-mentioned scenarios, which will be made publicly available for academic research. The initial investigation substantiates the possibility of using such face images for forensic purposes.
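The abstract names local region based features as the best performer without specifying them; below is a minimal sketch of one common choice, a grid-based local binary pattern (LBP) descriptor computed over a manually cropped grayscale face. The 8x8 grid, per-cell histograms, and cosine matching are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np

def lbp_image(gray):
    """Basic 3x3 local binary pattern codes for a 2-D grayscale array."""
    g = gray.astype(np.float32)
    c = g[1:-1, 1:-1]
    codes = np.zeros(c.shape, dtype=np.uint8)
    neighbours = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                  (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(neighbours):
        shifted = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        codes |= ((shifted >= c).astype(np.uint8) << bit)
    return codes

def grid_lbp_descriptor(gray, grid=(8, 8)):
    """Concatenate per-cell LBP histograms into one local-region descriptor."""
    codes = lbp_image(gray)
    gh, gw = grid
    h, w = codes.shape
    feats = []
    for i in range(gh):
        for j in range(gw):
            cell = codes[i * h // gh:(i + 1) * h // gh,
                         j * w // gw:(j + 1) * w // gw]
            hist, _ = np.histogram(cell, bins=256, range=(0, 256))
            feats.append(hist / max(cell.size, 1))
    return np.concatenate(feats)

def cosine_match(d1, d2):
    """Similarity between two descriptors, usable for ranking identities."""
    return float(d1 @ d2 / (np.linalg.norm(d1) * np.linalg.norm(d2) + 1e-12))
```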
{"title":"Linking face images captured from the optical phenomenon in the wild for forensic science","authors":"Abhijit Das, Abira Sengupta, M. A. Ferrer-Ballester, U. Pal, M. Blumenstein","doi":"10.1109/BTAS.2017.8272770","DOIUrl":"https://doi.org/10.1109/BTAS.2017.8272770","url":null,"abstract":"This paper discusses the possibility of use of some challenging face images scenario captured from optical phenomenon in the wild for forensic purpose towards individual identification. Occluded and under cover face images in surveillance scenario can be collected from its reflection on a surrounding glass or on a smooth wall that is under the coverage of the surveillance camera and such scenario of face images can be linked for forensic purposes. Another similar scenario that can also be used for forensic is the face images of an individual standing behind a transparent glass wall. To investigate the capability of these images for personal identification this study is conducted. This work investigated different types of features employed in the literature to establish individual identification by such degraded face images. Among them, local region based featured worked best. To achieve higher accuracy and better facial features face image were cropped manually along its close bounding box and noise removal was performed (reflection, etc.). In order to experiment we have developed a database considering the above mentioned scenario, which will be publicly available for academic research. Initial investigation substantiates the possibility of using such face images for forensic purpose.","PeriodicalId":372008,"journal":{"name":"2017 IEEE International Joint Conference on Biometrics (IJCB)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123653686","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fingerprint indexing based on pyramid deep convolutional feature
Pub Date: 2017-10-01 | DOI: 10.1109/BTAS.2017.8272699
Dehua Song, Jufu Feng
The ridges of a fingerprint contain enormous discriminative information for fingerprint indexing; however, it is hard for rule-based methods to depict the structure of ridges because of nonlinear distortion. This paper investigates representing the ridge structure with a Deep Convolutional Neural Network (DCNN). The indexing approach partitions the fingerprint image into increasingly fine sub-regions and extracts a feature from each sub-region with a DCNN, forming a pyramid deep convolutional feature that represents both global patterns and local details (especially minutiae). Extensive experimental results show that the proposed method achieves better accuracy and efficiency than other prominent indexing approaches. Finally, occlusion sensitivity, visualization, and fingerprint reconstruction techniques are employed to explore which attributes of the ridges are described by the deep convolutional feature.
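A hedged sketch of the pyramid idea described above: partition the fingerprint into increasingly fine sub-regions, embed each sub-region with a shared DCNN, and concatenate the embeddings into one indexing key. The tiny backbone, the 64x64 patch size, and the (1, 2, 4) pyramid levels are stand-in assumptions; the paper's actual network is not reproduced here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyBackbone(nn.Module):
    """Stand-in DCNN; the paper's architecture is not specified here."""
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(32, dim)

    def forward(self, x):
        return self.fc(self.net(x).flatten(1))

def pyramid_feature(img, backbone, levels=(1, 2, 4)):
    """Concatenate per-sub-region embeddings over an increasingly fine grid."""
    feats = []
    _, _, h, w = img.shape
    for g in levels:
        for i in range(g):
            for j in range(g):
                patch = img[:, :, i * h // g:(i + 1) * h // g,
                            j * w // g:(j + 1) * w // g]
                # resize so the shared backbone sees a fixed input size
                patch = F.interpolate(patch, size=(64, 64), mode="bilinear",
                                      align_corners=False)
                feats.append(backbone(patch))
    return torch.cat(feats, dim=1)  # used as the indexing key

backbone = TinyBackbone()
fp = torch.randn(1, 1, 256, 256)     # a grayscale fingerprint image
key = pyramid_feature(fp, backbone)  # 1 x (21 * 64): 1 + 4 + 16 sub-regions
```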
{"title":"Fingerprint indexing based on pyramid deep convolutional feature","authors":"Dehua Song, Jufu Feng","doi":"10.1109/BTAS.2017.8272699","DOIUrl":"https://doi.org/10.1109/BTAS.2017.8272699","url":null,"abstract":"The ridges of fingerprint contain enormous discriminative information for fingerprint indexing, however it is hard to depict the structure of ridges for rule-based methods because of nonlinear distortion. This paper investigates to represent the structure of ridges by Deep Convolutional Neural Network (DCNN). The indexing approach partitions the fingerprint image into increasing fine sub-region and extracts feature from each sub-region by DCNN, forming pyramid deep convolutional feature, to represent the global patterns and local details (especially minutiae). Extensive experimental results show that the proposed method achieves better performance on accuracy and efficiency than other prominent indexing approaches. Finally, occlusion sensitivity, visualization and fingerprint reconstruction techniques are employed to explore which attributes of ridges are described in deep convolutional feature.","PeriodicalId":372008,"journal":{"name":"2017 IEEE International Joint Conference on Biometrics (IJCB)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129356256","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
LivDet iris 2017 — Iris liveness detection competition 2017
Pub Date: 2017-10-01 | DOI: 10.1109/BTAS.2017.8272763
David Yambay, Benedict Becker, Naman Kohli, Daksha Yadav, A. Czajka, K. Bowyer, S. Schuckers, Richa Singh, Mayank Vatsa, A. Noore, Diego Gragnaniello, Carlo Sansone, L. Verdoliva, Lingxiao He, Yiwei Ru, Haiqing Li, Nianfeng Liu, Zhenan Sun, T. Tan
Presentation attacks, such as using a contact lens with a printed pattern or printouts of an iris, can be utilized to bypass a biometric security system. The first international iris liveness competition was launched in 2013 to assess the performance of presentation attack detection (PAD) algorithms, with a second competition in 2015. This paper presents results of the third competition, LivDet-Iris 2017. Three software-based approaches to presentation attack detection were submitted. Four datasets of live and spoof images were tested, with an additional cross-sensor test. New datasets and novel data conditions made this competition more difficult than the previous ones. The anonymous participant achieved the best results, with a rate of rejected live samples of 3.36% and a rate of accepted spoof samples of 14.71%. The results show that, even with advances, printed iris attacks as well as patterned contact lenses are still difficult for software-based systems to detect. Printed iris images were easier to differentiate from live images than patterned contact lenses, as was also seen in previous competitions.
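For reference, the two reported error rates correspond to straightforward score thresholding; a minimal sketch follows, assuming higher scores mean more likely live and a hypothetical threshold of 0.5 (both conventions are assumptions, not the competition's evaluation code).

```python
import numpy as np

def livdet_rates(live_scores, spoof_scores, threshold=0.5):
    """Rate of rejected live samples and rate of accepted spoof samples
    (in percent); the score convention and threshold are illustrative."""
    live = np.asarray(live_scores, dtype=float)
    spoof = np.asarray(spoof_scores, dtype=float)
    rejected_live = 100.0 * float(np.mean(live < threshold))     # live called spoof
    accepted_spoof = 100.0 * float(np.mean(spoof >= threshold))  # spoof called live
    return rejected_live, accepted_spoof

# a winning entry would report something like (3.36, 14.71)
print(livdet_rates([0.9, 0.8, 0.3], [0.1, 0.6, 0.2]))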
{"title":"LivDet iris 2017 — Iris liveness detection competition 2017","authors":"David Yambay, Benedict Becker, Naman Kohli, Daksha Yadav, A. Czajka, K. Bowyer, S. Schuckers, Richa Singh, Mayank Vatsa, A. Noore, Diego Gragnaniello, Carlo Sansone, L. Verdoliva, Lingxiao He, Yiwei Ru, Haiqing Li, Nianfeng Liu, Zhenan Sun, T. Tan","doi":"10.1109/BTAS.2017.8272763","DOIUrl":"https://doi.org/10.1109/BTAS.2017.8272763","url":null,"abstract":"Presentation attacks such as using a contact lens with a printed pattern or printouts of an iris can be utilized to bypass a biometric security system. The first international iris liveness competition was launched in 2013 in order to assess the performance of presentation attack detection (PAD) algorithms, with a second competition in 2015. This paper presents results of the third competition, LivDet-Iris 2017. Three software-based approaches to Presentation Attack Detection were submitted. Four datasets of live and spoof images were tested with an additional cross-sensor test. New datasets and novel situations of data have resulted in this competition being of a higher difficulty than previous competitions. Anonymous received the best results with a rate of rejected live samples of 3.36% and rate of accepted spoof samples of 14.71%. The results show that even with advances, printed iris attacks as well as patterned contacts lenses are still difficult for software-based systems to detect. Printed iris images were easier to be differentiated from live images in comparison to patterned contact lenses as was also seen in previous competitions.","PeriodicalId":372008,"journal":{"name":"2017 IEEE International Joint Conference on Biometrics (IJCB)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128246943","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Face morphing versus face averaging: Vulnerability and detection
Pub Date: 2017-10-01 | DOI: 10.1109/BTAS.2017.8272742
Ramachandra Raghavendra, K. Raja, S. Venkatesh, C. Busch
The Face Recognition System (FRS) is known to be vulnerable to attacks using morphed faces. As the use of face characteristics is mandatory in the electronic passport (ePass), morphing attacks have raised potential concerns in border security. In this paper, we analyze the vulnerability of the FRS to a new attack performed using the averaged face. The averaged face is generated by simple pixel-level averaging of two face images corresponding to two different subjects. We benchmark the vulnerability of a commercial FRS to both conventional morphing and averaging-based face attacks. We further propose a novel algorithm based on the collaborative representation of micro-texture features extracted from the colour space to reliably detect both morphed and averaged face attacks on the FRS. Extensive experiments are carried out on a newly constructed morphed and averaged face image database with 163 subjects. The database is built by considering the real-life scenario of passport issuance, which typically accepts a printed passport photo from the applicant that is then scanned and stored in the ePass. Thus, the newly constructed database contains print-scanned bona fide, morphed, and averaged face samples. The obtained results demonstrate the improved performance of the proposed scheme on the print-scanned morphed and averaged face database.
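The averaging attack itself is simple to state; a minimal sketch of pixel-level averaging follows, assuming the two face images are already geometrically aligned and equally sized (landmark alignment is not shown).

```python
import numpy as np

def averaged_face(face_a, face_b, alpha=0.5):
    """Pixel-level average of two aligned, same-size face images
    (uint8 arrays); alpha=0.5 gives the plain average described above."""
    a = face_a.astype(np.float32)
    b = face_b.astype(np.float32)
    out = alpha * a + (1.0 - alpha) * b
    return np.clip(out, 0, 255).astype(np.uint8)
```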
{"title":"Face morphing versus face averaging: Vulnerability and detection","authors":"Ramachandra Raghavendra, K. Raja, S. Venkatesh, C. Busch","doi":"10.1109/BTAS.2017.8272742","DOIUrl":"https://doi.org/10.1109/BTAS.2017.8272742","url":null,"abstract":"The Face Recognition System (FRS) is known to be vulnerable to the attacks using the morphed face. As the use of face characteristics are mandatory in the electronic passport (ePass), morphing attacks have raised the potential concerns in the border security. In this paper, we analyze the vulnerability of the FRS to the new attack performed using the averaged face. The averaged face is generated by simple pixel level averaging of two face images corresponding to two different subjects. We benchmark the vulnerability of the commercial FRS to both conventional morphing and averaging based face attacks. We further propose a novel algorithm based on the collaborative representation of the micro-texture features that are extracted from the colour space to reliably detect both morphed and averaged face attacks on the FRS. Extensive experiments are carried out on the newly constructed morphed and averaged face image database with 163 subjects. The database is built by considering the real-life scenario of the passport issuance that typically accepts the printed passport photo from the applicant that is further scanned and stored in the ePass. Thus, the newly constructed database is built to have the print-scanned bonafide, morphed and averaged face samples. The obtained results have demonstrated the improved performance of the proposed scheme on print-scanned morphed and averaged face database.","PeriodicalId":372008,"journal":{"name":"2017 IEEE International Joint Conference on Biometrics (IJCB)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128125054","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Continuous heart rate measurement from face: A robust rPPG approach with distribution learning
Pub Date: 2017-10-01 | DOI: 10.1109/BTAS.2017.8272752
Xuesong Niu, Hu Han, S. Shan, Xilin Chen
Non-contact heart rate (HR) measurement via remote photoplethysmography (rPPG) has drawn increasing attention. While a number of methods have been reported, most of them do not address the continuous HR measurement problem, which is more challenging due to the limited number of observed video frames and the requirement of speed. In this paper, we present a real-time rPPG method for continuous HR measurement from face videos. We use a multi-patch ROI strategy to remove outlier signals. A chrominance feature is then generated from each ROI to reduce color-channel magnitude differences, followed by temporal filtering to suppress artifacts. In addition, considering the temporal relationship of neighboring HR rhythms, we learn an HR distribution from historical HR measurements and apply it to succeeding HR estimations. Experimental results on the public-domain MAHNOB-HCI database and user tests with commodity webcams show the effectiveness of the proposed approach.
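As a hedged illustration of the chrominance step, the sketch below uses the well-known CHROM-style projection of per-frame mean RGB values followed by band-pass filtering; the paper's exact feature and filter settings may differ, and the 0.7-4.0 Hz band (42-240 bpm) is an assumption.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def chrom_signal(rgb_means, fps, lo=0.7, hi=4.0):
    """CHROM-style chrominance pulse signal from per-frame mean RGB of an ROI.
    rgb_means: (T, 3) array; the clip must be at least a second or so long
    for the band-pass filter's padding to apply."""
    rgb = np.asarray(rgb_means, dtype=np.float64)
    norm = rgb / rgb.mean(axis=0)              # reduce channel magnitude gaps
    x = 3.0 * norm[:, 0] - 2.0 * norm[:, 1]
    y = 1.5 * norm[:, 0] + norm[:, 1] - 1.5 * norm[:, 2]
    b, a = butter(3, [lo / (fps / 2), hi / (fps / 2)], btype="band")
    xf, yf = filtfilt(b, a, x), filtfilt(b, a, y)
    return xf - (xf.std() / (yf.std() + 1e-12)) * yf

def estimate_hr_bpm(s, fps):
    """HR from the dominant frequency of the pulse signal."""
    spec = np.abs(np.fft.rfft(s * np.hanning(len(s))))
    freqs = np.fft.rfftfreq(len(s), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)
    return float(freqs[band][np.argmax(spec[band])] * 60.0)
```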
{"title":"Continuous heart rate measurement from face: A robust rPPG approach with distribution learning","authors":"Xuesong Niu, Hu Han, S. Shan, Xilin Chen","doi":"10.1109/BTAS.2017.8272752","DOIUrl":"https://doi.org/10.1109/BTAS.2017.8272752","url":null,"abstract":"Non-contact heart rate (HR) measurement via remote photoplethysmography (rPPG) has drawn increasing attention. While a number of methods have been reported, most of them did not take into account the continuous HR measurement problem, which is more challenging due to limited observed video frames and the requirement of speed. In this paper, we present a real-time rPPG method for continuous HR measurement from face videos. We use a multi-patch ROI strategy to remove outlier signals. Chrominance feature is then generated from each ROI to reduce the color channel magnitude differences, which is followed by temporal filtering to suppress the artifacts. In addition, considering the temporal relationship of neighboring HR rhythms, we learn a HR distribution based on historical HR measurements, and apply it to the succeeding HR estimations. Experiment results on the public-domain MAHNOB-HCI database and user tests with commodity webcams show the effectiveness of the proposed approach.","PeriodicalId":372008,"journal":{"name":"2017 IEEE International Joint Conference on Biometrics (IJCB)","volume":"58 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129052157","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On the feasibility of creating morphed iris-codes
Pub Date: 2017-10-01 | DOI: 10.1109/BTAS.2017.8272693
C. Rathgeb, C. Busch
Morphing techniques can be used to create artificial biometric samples that resemble the biometric information of two (or more) individuals in the image and feature domains. If morphed biometric images or templates are infiltrated into a biometric recognition system, the subjects contributing to the morphed image will both (or all) be successfully verified against a single enrolled template. Hence, the unique link between individuals and their biometric reference data is annulled. The vulnerability of face and fingerprint recognition systems to such morphing attacks has been assessed in the recent past. In this paper we investigate the feasibility of morphing iris-codes. Two relevant attack scenarios are discussed, and a scheme for morphing pairs of iris-codes depending on the expected stability of their bits is proposed. Different iris recognition systems, which accept comparisons at a recommended Hamming distance threshold of 0.32, are shown to be vulnerable to attacks based on the presented morphing technique.
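A minimal sketch of the two ingredients named above: the fractional Hamming distance behind the 0.32 acceptance threshold, and a stability-driven morph that, for each bit position, keeps the bit of whichever subject is more consistent across repeated acquisitions. The exact morphing rule below is an assumption in the spirit of the abstract, not the authors' scheme.

```python
import numpy as np

def fractional_hd(code_a, code_b, mask=None):
    """Fractional Hamming distance between binary iris-codes over valid bits;
    a comparison is accepted when the distance is below the threshold (0.32)."""
    a, b = np.asarray(code_a, bool), np.asarray(code_b, bool)
    valid = np.ones_like(a) if mask is None else np.asarray(mask, bool)
    return float(np.count_nonzero((a ^ b) & valid)
                 / max(np.count_nonzero(valid), 1))

def morph_iris_codes(codes_a, codes_b):
    """Hedged sketch of stability-driven morphing. codes_*: (n_samples,
    n_bits) boolean arrays from repeated acquisitions of each subject; at
    each position, take the majority bit of the subject whose enrolled
    codes agree more consistently there (frequency far from 0.5)."""
    a, b = np.asarray(codes_a, float), np.asarray(codes_b, float)
    pa, pb = a.mean(0), b.mean(0)                        # per-bit frequency of 1
    stab_a, stab_b = np.abs(pa - 0.5), np.abs(pb - 0.5)  # 0.5 = least stable
    return np.where(stab_a >= stab_b, pa >= 0.5, pb >= 0.5)
```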
{"title":"On the feasibility of creating morphed iris-codes","authors":"C. Rathgeb, C. Busch","doi":"10.1109/BTAS.2017.8272693","DOIUrl":"https://doi.org/10.1109/BTAS.2017.8272693","url":null,"abstract":"Morphing techniques can be used to create artificial biometric samples, which resemble the biometric information of two (or more) individuals in image and feature domain. If morphed biometric images or templates are infiltrated to a biometric recognition system the subjects contributing to the morphed image will both (or all) be successfully verified against a single enrolled template. Hence, the unique link between individuals and their biometric reference data is annulled. The vulnerability of face and fingerprint recognition systems to such morphing attacks has been assessed in the recent past. In this paper we investigate the feasibility of morphing iris-codes. Two relevant attack scenarios are discussed and a scheme for morphing pairs of iris-codes depending on the expected stability of their bits is proposed. Different iris recognition systems, which accept comparison scores at a recommended Hamming distance of 0.32, are shown to be vulnerable to attacks based on the presented morphing technique.","PeriodicalId":372008,"journal":{"name":"2017 IEEE International Joint Conference on Biometrics (IJCB)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116958277","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On fine-tuning convolutional neural networks for smartphone based ocular recognition
Pub Date: 2017-10-01 | DOI: 10.1109/BTAS.2017.8272767
A. Rattani, R. Derakhshani
Recently reported advances in smartphone-based ocular biometric recognition in the visible spectrum have demonstrated the efficacy of deep-learning schemes. In this paper, we evaluate convolutional neural networks (CNNs) pretrained for large-scale object recognition, namely VGG-16, VGG-19, InceptionNet, and ResNet, fine-tuned for ocular recognition using RGB images captured by smartphones. Fine-tuning pretrained CNN models is advantageous when training data is insufficient, and the partial training is faster than training a custom CNN from scratch. Experiments on the VISOB dataset yielded a TPR of up to 100% at an FPR of 10⁻⁴ using the VGG-16 model fine-tuned for ocular recognition.
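A hedged fine-tuning sketch in PyTorch for the VGG-16 case described above: reuse ImageNet weights, freeze the convolutional features, and retrain only the classifier head (the partial training mentioned in the abstract). The class count and hyperparameters are placeholders, and torchvision >= 0.13 is assumed for the weights API.

```python
import torch
import torch.nn as nn
from torchvision import models

def build_ocular_vgg16(num_subjects, freeze_features=True):
    """Reuse ImageNet VGG-16 features and retrain only the head."""
    net = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
    if freeze_features:
        for p in net.features.parameters():
            p.requires_grad = False                 # partial training only
    net.classifier[6] = nn.Linear(4096, num_subjects)  # replace 1000-way head
    return net

model = build_ocular_vgg16(num_subjects=100)  # class count is dataset-specific
optim = torch.optim.SGD((p for p in model.parameters() if p.requires_grad),
                        lr=1e-3, momentum=0.9)
```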
{"title":"On fine-tuning convolutional neural networks for smartphone based ocular recognition","authors":"A. Rattani, R. Derakhshani","doi":"10.1109/BTAS.2017.8272767","DOIUrl":"https://doi.org/10.1109/BTAS.2017.8272767","url":null,"abstract":"Recent reported advances in smartphone based ocular biometric recognition in visible spectrum demonstrated the efficacy of deep-learning schemes. In this paper, we evaluate convolutional neural networks (CNNs) pretrained for large scale object recognition, namely VGG-16, VGG-19, InceptionNet and ResNet, and fine-tuned for ocular recognition using RGB images captured by smartphones. Fine-tuning pretrained CNN models is advantageous in case of insufficient training data, and the partial training is faster compared to custom CNN trained from scratch. Experiments on VISOB dataset yielded TPR of up to 100% at FPR of 10−4 using VGG-16 model fine-tuned for ocular recognition.","PeriodicalId":372008,"journal":{"name":"2017 IEEE International Joint Conference on Biometrics (IJCB)","volume":"46 4","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120875392","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fingerprint presentation attacks detection based on the user-specific effect
Pub Date: 2017-10-01 | DOI: 10.1109/BTAS.2017.8272717
Luca Ghiani, G. Marcialis, F. Roli
The similarities among different acquisitions of the same fingerprint have so far never been taken into account in the feature space designed to detect fingerprint presentation attacks. The existence of such resemblances has only been shown in a recent work, where the authors were able to describe what they called the "user-specific effect". In this paper we present a first attempt to take advantage of this effect in order to improve the performance of a fingerprint presentation attack detection (FPAD) system. In particular, we conceived a binary code of three bits aimed at detecting this effect. Coupled with a classifier trained according to the standard protocol followed, for example, in the LivDet competition, this approach allowed us to obtain better accuracy than the "generic users" classifier alone.
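The construction of the three-bit code is not specified in this abstract; purely as a hedged illustration of coupling a user-specific signal with a generic PAD feature vector, the sketch below thresholds the probe's mean similarity to the claimed user's enrolled live samples at three levels and appends the resulting bits. All names and thresholds are hypothetical.

```python
import numpy as np

def user_specific_bits(probe_feat, enrolled_feats, thresholds=(0.5, 0.7, 0.9)):
    """Illustrative only: derive three bits from the probe's mean cosine
    similarity to the claimed user's enrolled live samples and append them
    to the generic PAD feature vector fed to the classifier."""
    e = np.asarray(enrolled_feats, dtype=float)   # (n_enrolled, d)
    p = np.asarray(probe_feat, dtype=float)       # (d,)
    sims = e @ p / (np.linalg.norm(e, axis=1) * np.linalg.norm(p) + 1e-12)
    s = float(sims.mean())
    bits = np.array([s >= t for t in thresholds], dtype=float)
    return np.concatenate([p, bits])              # augmented feature vector
```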
{"title":"Fingerprint presentation attacks detection based on the user-specific effect","authors":"Luca Ghiani, G. Marcialis, F. Roli","doi":"10.1109/BTAS.2017.8272717","DOIUrl":"https://doi.org/10.1109/BTAS.2017.8272717","url":null,"abstract":"The similarities among different acquisitions of the same fingerprint have never been taken into account, so far, in the feature space designed to detect fingerprint presentation attacks. Actually, the existence of such resemblances has only been shown in a recent work where the authors have been able to describe what they called the “user-specific effect”. We present in this paper a first attempt to take advantage of this in order to improve the performance of a FPAD system. In particular, we conceived a binary code of three bits aimed to “detect” such effect. Coupled with a classifier trained according to the standard protocol followed, for example, in the LivDet competition, this approach allowed us to get a better accuracy compared to that obtained with the “generic users” classifier alone.","PeriodicalId":372008,"journal":{"name":"2017 IEEE International Joint Conference on Biometrics (IJCB)","volume":"47 3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122193436","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Deep features-based expression-invariant tied factor analysis for emotion recognition
Pub Date: 2017-10-01 | DOI: 10.1109/BTAS.2017.8272741
Sarasi Munasinghe, C. Fookes, S. Sridharan
Video-based facial expression recognition is an open research challenge not solved by the current state of the art. On the other hand, static image-based emotion recognition is highly important when videos are not available and human emotions need to be determined from a single shot only. This paper proposes sequence-based and image-based tied factor analysis frameworks with a deep network that simultaneously address these two problems. For video-based data, we first extract deep convolutional temporal appearance features from image sequences; these features are then fed into a generative model that constructs a low-dimensional observed space for all individuals, depending on the facial expression sequences. After learning the sequential expression components of the transition matrices among the expression manifolds, we use a Gaussian probabilistic approach to design an efficient classifier for temporal facial expression recognition. Furthermore, we analyse the utility of the proposed video-based methods for image-based emotion recognition by learning static tied factor analysis parameters. This model can also be used to predict expressive face image sequences from given neutral faces. Recognition results achieved on three public benchmark databases (CK+, JAFFE, and FER2013) clearly indicate that our approach achieves effective performance compared with current techniques for handling sequential and static facial expression variations.
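As a hedged sketch of the Gaussian probabilistic classification step only (the tied-factor-analysis projection that produces the low-dimensional features is assumed, not implemented), one Gaussian per expression class is fit and prediction is by maximum log-likelihood.

```python
import numpy as np
from scipy.stats import multivariate_normal

class GaussianExpressionClassifier:
    """Fit one Gaussian per expression class in a low-dimensional feature
    space and classify by maximum log-likelihood; a small ridge keeps the
    covariance well-conditioned."""
    def fit(self, feats, labels):
        self.classes = sorted(set(labels))
        self.models = {}
        for c in self.classes:
            x = np.asarray([f for f, l in zip(feats, labels) if l == c])
            self.models[c] = multivariate_normal(
                mean=x.mean(0),
                cov=np.cov(x.T) + 1e-6 * np.eye(x.shape[1]))
        return self

    def predict(self, f):
        return max(self.classes, key=lambda c: self.models[c].logpdf(f))
```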
{"title":"Deep features-based expression-invariant tied factor analysis for emotion recognition","authors":"Sarasi Munasinghe, C. Fookes, S. Sridharan","doi":"10.1109/BTAS.2017.8272741","DOIUrl":"https://doi.org/10.1109/BTAS.2017.8272741","url":null,"abstract":"Video-based facial expression recognition is an open research challenge not solved by the current state-of-the-art. On the other hand, static image based emotion recognition is highly important when videos are not available and human emotions need to be determined from a single shot only. This paper proposes sequential-based and image-based tied factor analysis frameworks with a deep network that simultaneously addresses these two problems. For video-based data, we first extract deep convolutional temporal appearance features from image sequences and then these features are fed into a generative model that constructs a low-dimensional observed space for all individuals, depending on the facial expression sequences. After learning the sequential expression components of the transition matrices among the expression manifolds, we use a Gaussian probabilistic approach to design an efficient classifier for temporal facial expression recognition. Furthermore, we analyse the utility of proposed video-based methods for image-based emotion recognition learning static tied factor analysis parameters. Meanwhile, this model can be used to predict the expressive face image sequences from given neutral faces. Recognition results achieved on three public benchmark databases: CK+, JAFFE, and FER2013, clearly indicate our approach achieves effective performance over the current techniques of handling sequential and static facial expression variations.","PeriodicalId":372008,"journal":{"name":"2017 IEEE International Joint Conference on Biometrics (IJCB)","volume":"84 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117172718","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Synthetic iris presentation attack using iDCGAN
Pub Date: 2017-10-01 | DOI: 10.1109/BTAS.2017.8272756
Naman Kohli, Daksha Yadav, Mayank Vatsa, Richa Singh, A. Noore
The reliability and accuracy of the iris biometric modality have prompted its large-scale deployment for critical applications such as border control and national ID projects. The extensive growth of iris recognition systems has raised apprehensions about the susceptibility of these systems to various attacks. In the past, researchers have examined the impact of various iris presentation attacks such as textured contact lenses and print attacks. In this research, we present a novel presentation attack using deep learning based synthetic iris generation. Utilizing the generative capability of deep convolutional generative adversarial networks and iris quality metrics, we propose a new framework, named iDCGAN (iris deep convolutional generative adversarial network), for generating realistic-appearing synthetic iris images. We demonstrate the effect of these synthetically generated iris images as a presentation attack on iris recognition using a commercial system. The state-of-the-art presentation attack detection framework DESIST is utilized to analyze whether it can discriminate these synthetically generated iris images from real images. The experimental results illustrate that mitigating the proposed synthetic presentation attack is of paramount importance.
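The iDCGAN architecture is not detailed in this abstract; the sketch below is a standard DCGAN-style generator (the family the paper builds on), mapping a latent vector to a 64x64 single-channel image. Channel widths and output size are assumptions.

```python
import torch
import torch.nn as nn

class IrisGenerator(nn.Module):
    """DCGAN-style generator sketch (not the paper's exact iDCGAN): strided
    transposed convolutions with BatchNorm/ReLU, tanh output in [-1, 1]."""
    def __init__(self, z_dim=100, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, ch * 8, 4, 1, 0, bias=False),
            nn.BatchNorm2d(ch * 8), nn.ReLU(True),             # 4x4
            nn.ConvTranspose2d(ch * 8, ch * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ch * 4), nn.ReLU(True),             # 8x8
            nn.ConvTranspose2d(ch * 4, ch * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ch * 2), nn.ReLU(True),             # 16x16
            nn.ConvTranspose2d(ch * 2, ch, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ch), nn.ReLU(True),                 # 32x32
            nn.ConvTranspose2d(ch, 1, 4, 2, 1, bias=False),
            nn.Tanh(),                                         # 64x64 output
        )

    def forward(self, z):
        return self.net(z.view(z.size(0), -1, 1, 1))

g = IrisGenerator()
fake = g(torch.randn(8, 100))  # eight synthetic iris candidates
```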
{"title":"Synthetic iris presentation attack using iDCGAN","authors":"Naman Kohli, Daksha Yadav, Mayank Vatsa, Richa Singh, A. Noore","doi":"10.1109/BTAS.2017.8272756","DOIUrl":"https://doi.org/10.1109/BTAS.2017.8272756","url":null,"abstract":"Reliability and accuracy of iris biometric modality has prompted its large-scale deployment for critical applications such as border control and national ID projects. The extensive growth of iris recognition systems has raised apprehensions about susceptibility of these systems to various attacks. In the past, researchers have examined the impact of various iris presentation attacks such as textured contact lenses and print attacks. In this research, we present a novel presentation attack using deep learning based synthetic iris generation. Utilizing the generative capability of deep con-volutional generative adversarial networks and iris quality metrics, we propose a new framework, named as iDCGAN (iris deep convolutional generative adversarial network) for generating realistic appearing synthetic iris images. We demonstrate the effect of these synthetically generated iris images as presentation attack on iris recognition by using a commercial system. The state-of-the-art presentation attack detection framework, DESIST is utilized to analyze if it can discriminate these synthetically generated iris images from real images. The experimental results illustrate that mitigating the proposed synthetic presentation attack is of paramount importance.","PeriodicalId":372008,"journal":{"name":"2017 IEEE International Joint Conference on Biometrics (IJCB)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115286605","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}