Pub Date: 2019-06-01 | DOI: 10.1109/ICB45273.2019.8987316
N. Damer, Alexandra Mosegui Saladie, Steffen Zienert, Yaza Wainakh, Philipp Terhörst, Florian Kirchbuchner, Arjan Kuijper
Recent works have studied how face morphing attack detection performance generalizes over variations in morphing approaches, image re-digitization, and image sources. However, these works assumed a constant approach for selecting the images to be morphed (pairing) across their training and testing data. Realistic variation in the pairing protocol of the training data can create both challenges and opportunities for a stable attack detector. This work studies this issue extensively by building a novel database with three different pairing protocols and two different morphing approaches. We study detection generalization over these variations for single-image and differential attack detection, with both handcrafted and CNN-based features. We observe that training an attack detection solution on attacks created from dissimilar face images, contrary to common practice, can result in overall more generalized detection performance. Moreover, we find that differential attack detection is very sensitive to variations in morphing and pairing protocols.
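As a rough illustration of the pairing idea in the abstract above, the sketch below pairs faces for morphing by embedding similarity and blends an aligned pair. The function names, the cosine-similarity pairing criterion, and the pixel-level alpha blend are illustrative assumptions, not the paper's actual protocol (a real morph also warps both faces onto averaged landmarks before blending):

```python
import numpy as np

def morph_pair(img_a, img_b, alpha=0.5):
    """Blend two aligned face images into a morph (pixel-level sketch).

    A full morphing pipeline also warps both faces onto averaged
    landmarks before blending; that geometric step is omitted here.
    """
    return (alpha * img_a + (1.0 - alpha) * img_b).astype(img_a.dtype)

def pair_by_similarity(embeddings, dissimilar=True):
    """Pick a morphing partner for each face by cosine similarity.

    dissimilar=True selects the least-similar partner for each face,
    mirroring the pairing protocol the abstract reports as yielding
    more generalized detectors when used to build training attacks.
    """
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = e @ e.T
    # exclude self-pairing from the argmin/argmax below
    np.fill_diagonal(sim, np.inf if dissimilar else -np.inf)
    return sim.argmin(axis=1) if dissimilar else sim.argmax(axis=1)
```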
Title: To Detect or not to Detect: The Right Faces to Morph
Published in: 2019 International Conference on Biometrics (ICB)
Pub Date: 2019-06-01 | DOI: 10.1109/ICB45273.2019.8987399
S. Marrone, Carlo Sansone
Usage of Fingerprint-based Authentication Systems (FAS) has been increasing in recent years thanks to the growing availability of cheap and reliable scanners. To bypass a FAS with a counterfeit fingerprint, a Presentation Attack (PA) can be used. As a consequence, a liveness detector able to discern authentic from fake biometric traits becomes almost essential in each FAS. Deep learning based approaches have proven very effective against fingerprint presentation attacks, becoming the current state of the art in liveness detection. However, it has been shown that state-of-the-art CNNs can be made to arbitrarily misclassify an image by applying a suitably small perturbation to it, often imperceptible to human eyes. The aim of this work is to understand if, and to what extent, adversarial perturbations can affect FASs, as a preliminary step towards developing an adversarial presentation attack. Results show that it is possible to exploit adversarial perturbations to mislead both the FAS liveness detector and the authentication system, with perturbations that are almost imperceptible to human eyes.
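The kind of small input perturbation discussed above can be illustrated with the Fast Gradient Sign Method (FGSM), a standard attack; the paper's exact perturbation method may differ, and the input gradient is assumed to be precomputed by the attacked network:

```python
import numpy as np

def fgsm_perturb(image, grad, eps=0.03):
    """One FGSM step: nudge every pixel by eps in the direction that
    increases the classifier's loss, then clip back to valid range.

    image: array with values in [0, 1]; grad: gradient of the loss
    w.r.t. the input pixels (assumed precomputed elsewhere).
    """
    adv = image + eps * np.sign(grad)
    return np.clip(adv, 0.0, 1.0)
```

Small values of `eps` keep the perturbed image visually indistinguishable from the original, matching the "almost imperceptible" behavior the abstract reports.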
Title: Adversarial Perturbations Against Fingerprint Based Authentication Systems
Published in: 2019 International Conference on Biometrics (ICB)
Pub Date: 2019-06-01 | DOI: 10.1109/ICB45273.2019.8987297
Richard Plesh, Keivan Bahmani, Ganghee Jang, David Yambay, Ken Brownlee, Timothy Swyka, Peter A. Johnson, A. Ross, S. Schuckers
Fingerprint capture systems can be fooled by widely accessible methods that spoof the system using fake fingers, known as presentation attacks. As biometric recognition systems become more extensively relied upon at international borders and in consumer electronics, presentation attacks are becoming an increasingly serious issue. A robust solution is needed that can handle the increased variability and complexity of spoofing techniques. This paper demonstrates the viability of utilizing a sensor with time-series and color-sensing capabilities to improve the robustness of a traditional fingerprint sensor, and introduces a comprehensive fingerprint dataset with over 36,000 image sequences and a state-of-the-art set of spoofing techniques. The specific sensor used in this research simultaneously captures a traditional gray-scale static image and a time-series color capture. Two different methods for Presentation Attack Detection (PAD) are used to assess the benefit of the color dynamic capture. The first algorithm applies static-temporal feature engineering to the fingerprint capture to generate a classification decision. The second generates its classification decision using features extracted by the Inception V3 CNN trained on ImageNet. Classification performance is evaluated using features extracted exclusively from the static capture, exclusively from the dynamic capture, and from a fusion of the two feature sets. With both PAD approaches, we find that fusing the dynamic and static feature sets improves performance to a level neither achieves individually.
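A minimal sketch of the fusion evaluated above, at the feature level (concatenation) and at the score level (weighted sum). Shapes, the weight `w`, and the function names are hypothetical; the paper's classifiers are more elaborate:

```python
import numpy as np

def fuse_features(static_feats, dynamic_feats):
    """Feature-level fusion: concatenate each sample's static and
    dynamic descriptors before training a single PAD classifier."""
    return np.concatenate([static_feats, dynamic_feats], axis=1)

def fuse_scores(static_score, dynamic_score, w=0.5):
    """Score-level alternative: weighted sum of the two PAD scores
    (the weight w is a hypothetical parameter)."""
    return w * static_score + (1.0 - w) * dynamic_score
```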
Title: Fingerprint Presentation Attack Detection utilizing Time-Series, Color Fingerprint Captures
Published in: 2019 International Conference on Biometrics (ICB)
Pub Date: 2019-06-01 | DOI: 10.1109/ICB45273.2019.8987407
T. Neal, D. Woodard
The rise of mobile devices has contributed new biometric modalities which reflect behavioral tendencies as users interact with the device’s services. In this paper, we explore replay attacks against such systems and how a remote attack might affect authentication performance. Few efforts focus on replay attacks in mobile biometric systems, and none to our knowledge relate to user-device interactions, such as the use of mobile apps. Instead, previous efforts have mainly considered spoofing attacks, which imply that the attacker has learned their target’s behavior rather than obtaining a direct copy of logged behavior by theft. Here, we explore temporally-derived replay attacks that assume that application, Bluetooth, and Wi-Fi data have been captured remotely and then intelligently combined with some level of noise to avoid replaying an exact copy of legitimate data. We study several factors that may affect replay attack detection, including the amount of data available during data collection, the number of samples used for training, and supervised versus unsupervised learning. In our analysis, false positive rates increased from 2.3% with zero-effort attacks to over 40% as a result of replay attacks. However, our results also show that by contextualizing behavior in the feature representation, false positive rates decrease by over 25%.
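The noisy-replay idea above, replaying captured behavioral data with a little noise so the sample is not an exact copy, can be sketched as follows. The Gaussian noise model and the `noise_scale` parameter are assumptions, not the paper's exact procedure:

```python
import numpy as np

def replay_with_noise(logged_features, noise_scale=0.05, rng=None):
    """Simulate a temporally-derived replay attack: resubmit stolen
    behavioral feature vectors after adding small random noise so the
    replay is not byte-identical to the logged data."""
    rng = np.random.default_rng(0) if rng is None else rng
    noise = rng.normal(0.0, noise_scale, size=np.shape(logged_features))
    return np.asarray(logged_features) + noise
```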
Title: Mobile Biometrics, Replay Attacks, and Behavior Profiling: An Empirical Analysis of Impostor Detection
Published in: 2019 International Conference on Biometrics (ICB)
Pub Date: 2019-06-01 | DOI: 10.1109/ICB45273.2019.8987243
Yanqing Guo, Qianyu Wang, Huaibo Huang, Xin Zheng, Zhaofeng He
Low resolution iris images often degrade iris recognition performance due to a lack of texture detail. This paper proposes an adversarial iris super resolution method, named IrisDNet, that combines a densely connected convolutional network with adversarial learning. The densely connected network is employed to maximize information flow between layers and achieve high iris texture reconstruction performance. An adversarial network is further incorporated into the densely connected network to sharpen the texture details of the iris. Moreover, to preserve identity, we employ a pretrained network to compute an identity preserving loss that encourages semantically preserved patterns. Extensive experiments on super resolution and iris verification at multiple upscaling factors demonstrate that the proposed method achieves pleasing results with abundant high-frequency textures while maintaining identity information.
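The composite training objective implied above (pixel reconstruction, an adversarial term, and an identity preserving loss on a pretrained recognition network's features) might look like the following sketch. The weights and the exact loss forms are assumptions, not the paper's values:

```python
import numpy as np

def total_sr_loss(sr, hr, d_fake, feat_sr, feat_hr,
                  w_adv=1e-3, w_id=1e-2):
    """Illustrative generator loss for identity-preserving iris SR.

    sr/hr: super-resolved and ground-truth images; d_fake: the
    discriminator's probability that sr is real; feat_sr/feat_hr:
    features of sr/hr from a pretrained identity network.
    """
    pixel = np.mean((sr - hr) ** 2)               # reconstruction
    adv = -np.mean(np.log(d_fake + 1e-12))        # fool discriminator
    ident = np.mean((feat_sr - feat_hr) ** 2)     # identity preserving
    return pixel + w_adv * adv + w_id * ident
```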
Title: Adversarial Iris Super Resolution
Published in: 2019 International Conference on Biometrics (ICB)
Pub Date: 2019-06-01 | DOI: 10.1109/ICB45273.2019.8987309
Haibo Jin, Shifeng Zhang, Xiangyu Zhu, Yinhang Tang, Zhen Lei, S. Li
Although face detection has progressed significantly in recent years, it is still challenging to obtain a fast face detector with competitive performance, especially on CPU-based devices. In this paper, we propose a novel loss function based on knowledge distillation to boost the performance of lightweight face detectors. More specifically, a student detector learns additional soft labels from a teacher detector by mimicking its classification map. To make the knowledge transfer more efficient, a threshold function is designed to assign threshold values adaptively for different objectness scores so that only the informative samples are used for mimicking. Experiments on FDDB and WIDER FACE show that the proposed method consistently improves the performance of face detectors. With the help of the proposed training method, we obtain a CPU real-time face detector that runs at 20 FPS while achieving state-of-the-art performance among CPU-based detectors.
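The adaptive-threshold mimicking described above can be sketched as a masked regression on the teacher's classification map. Here the per-position thresholds are taken as given, whereas the paper derives them adaptively from objectness scores:

```python
import numpy as np

def mimic_loss(student_map, teacher_map, thresholds):
    """Distillation term: the student regresses the teacher's
    classification map, but only at informative positions where the
    teacher's objectness exceeds its (adaptive) threshold."""
    mask = teacher_map > thresholds
    if not mask.any():
        return 0.0   # nothing informative to mimic
    return float(np.mean((student_map[mask] - teacher_map[mask]) ** 2))
```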
Title: Learning Lightweight Face Detector with Knowledge Distillation
Published in: 2019 International Conference on Biometrics (ICB)
Pub Date: 2019-06-01 | DOI: 10.1109/ICB45273.2019.8987326
Soroush Fatemifar, Muhammad Awais, S. R. Arashloo, J. Kittler
One-class spoofing detection approaches have been an effective alternative to two-class learners in face presentation attack detection, particularly in unseen attack scenarios. We propose an ensemble-based anomaly detection approach applicable to one-class classifiers. A new score normalisation method is proposed to normalise the output of individual outlier detectors before fusion. To satisfy the accuracy and diversity objectives for the component classifiers, three different strategies are utilised to build a pool of anomaly experts. To boost performance, we also make use of client-specific information, both in the design of individual experts and in setting a distinct threshold for each client. We carry out extensive experiments on three face anti-spoofing datasets and show that the proposed ensemble approaches are comparable or superior to techniques based on the two-class formulation or class-independent settings.
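A hedged sketch of the fusion pipeline described above: normalise each one-class expert's scores, average them, and apply a client-specific threshold. The z-score normalisation below is a common stand-in, not the paper's proposed normalisation method:

```python
import numpy as np

def normalise_scores(scores, mean, std):
    """Z-score normalise one expert's outputs using statistics
    estimated on bona fide training data (stand-in normalisation)."""
    return (np.asarray(scores) - mean) / (std + 1e-12)

def ensemble_decision(score_matrix, client_threshold):
    """Fuse normalised anomaly scores from several one-class experts
    (rows) by averaging over experts, then compare the fused score to
    a client-specific threshold. True means flagged as an attack."""
    fused = np.asarray(score_matrix).mean(axis=0)
    return fused > client_threshold
```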
Title: Combining Multiple one-class Classifiers for Anomaly based Face Spoofing Attack Detection
Published in: 2019 International Conference on Biometrics (ICB)
Pub Date: 2019-06-01 | DOI: 10.1109/ICB45273.2019.8987303
Kevin Hernandez-Diaz, F. Alonso-Fernandez, J. Bigün
Periocular recognition has gained attention in recent years thanks to its high discrimination capability in less constrained scenarios than other ocular modalities. In this paper we propose a method for periocular verification under different light spectra using CNN features, with the particularity that the network has not been trained for this purpose. We use a ResNet-101 model pretrained on the ImageNet Large Scale Visual Recognition Challenge to extract features from the IIITD Multispectral Periocular Database. At each layer the features are compared using the χ2 distance and cosine similarity to perform verification between images, achieving improvements in EER and in accuracy at 1% FAR of up to 63.13% and 24.79%, respectively, in comparison to previous works that employ the same database. In addition, we train a neural network to match the best CNN feature layer vectors from each spectrum. With this procedure, we achieve improvements of up to 65% (EER) and 87% (accuracy at 1% FAR) in cross-spectral verification with respect to previous studies.
Title: Cross Spectral Periocular Matching using ResNet Features
Published in: 2019 International Conference on Biometrics (ICB)
Pub Date: 2019-06-01 | DOI: 10.1109/ICB45273.2019.8987356
Yuhang Liu, Yao Tang, Ruilin Li, Jufu Feng
A robust fingerprint enhancement algorithm is crucial to latent fingerprint recognition. In this paper, a latent fingerprint enhancement model named cooperative orientation generative adversarial network (COOGAN) is proposed. We formulate fingerprint enhancement as an image-to-image translation problem with a deep generative adversarial network (GAN) and introduce orientation constraints. The deep architecture provides a powerful representation for the translation between the latent fingerprint space and the enhanced fingerprint space, while the orientation supervision guides the deep feature learning to focus more on the ridge flows. To further boost performance, a quality estimation module is proposed to remove unrecoverable regions during enhancement. Experimental results show that COOGAN achieves state-of-the-art performance on the NIST SD27 latent fingerprint database.
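For context, the ridge orientation that COOGAN supervises with a learned constraint is classically estimated per block from image gradients. The sketch below shows that textbook least-squares estimator, not the paper's network; the Sobel gradient responses are assumed to be precomputed:

```python
import numpy as np

def ridge_orientation(block_gx, block_gy):
    """Classical least-squares ridge orientation of one image block,
    from its per-pixel x/y gradients. The dominant gradient direction
    is found via the doubled-angle trick; ridges run perpendicular
    to it, hence the +pi/2."""
    gxx = np.sum(block_gx * block_gx)
    gyy = np.sum(block_gy * block_gy)
    gxy = np.sum(block_gx * block_gy)
    return 0.5 * np.arctan2(2.0 * gxy, gxx - gyy) + np.pi / 2.0
```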
Title: Cooperative Orientation Generative Adversarial Network for Latent Fingerprint Enhancement
Published in: 2019 International Conference on Biometrics (ICB)
Pub Date: 2019-06-01 | DOI: 10.1109/ICB45273.2019.8987293
Akshay Agarwal, Akarsha Sehwag, Mayank Vatsa, Richa Singh
Face recognition systems are vulnerable to presentation attacks such as replays and 3D masks. In the literature, several presentation attack detection (PAD) algorithms have been developed to address this problem. However, for the first time in the literature, this paper showcases that it is possible to "fool" PAD algorithms using adversarial perturbations. The proposed perturbation approach attacks presentation attack detection algorithms at the PAD feature level via transformation of features from one class (attack class) to another (real class). The PAD feature tampering network utilizes a convolutional autoencoder to learn the perturbations. The proposed algorithm is evaluated against CNN and local binary pattern (LBP) based PAD algorithms. Experiments on three databases, Replay, SMAD, and Face Morph, showcase that the proposed approach at least doubles the equal error rate of the PAD algorithms. For instance, on the SMAD database, the PAD equal error rate (EER) increases from 20.1% to 55.7% after attacking the PAD algorithm.
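The equal error rate reported above is the operating point where the false accept and false reject rates coincide. A simple threshold-sweep approximation, assuming higher scores indicate bona fide samples:

```python
import numpy as np

def equal_error_rate(genuine_scores, attack_scores):
    """Approximate EER: sweep thresholds over all observed scores and
    return the averaged FAR/FRR at the point where they are closest."""
    genuine = np.asarray(genuine_scores, dtype=float)
    attacks = np.asarray(attack_scores, dtype=float)
    best_diff, best_eer = np.inf, 1.0
    for t in np.unique(np.concatenate([genuine, attacks])):
        far = np.mean(attacks >= t)   # attacks wrongly accepted
        frr = np.mean(genuine < t)    # bona fide wrongly rejected
        if abs(far - frr) < best_diff:
            best_diff, best_eer = abs(far - frr), (far + frr) / 2.0
    return float(best_eer)
```

A successful feature-level attack like the one described above shifts the attack-score distribution toward the genuine one, which is exactly what drives the EER up.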
Title: Deceiving the Protector: Fooling Face Presentation Attack Detection Algorithms
Published in: 2019 International Conference on Biometrics (ICB)