Pub Date: 2019-06-01 | DOI: 10.1109/ICB45273.2019.8987266
K. B. Raja, Ramachandra Raghavendra, C. Busch
The strength of iris recognition in terms of optimal biometric performance has been challenged by inevitable operational conditions in unconstrained scenarios. In this work we present a new approach for extracting stable iris weight maps to account for the noisy iris representations that result from capture conditions and ineluctable segmentation errors. Traditional approaches to extracting stable bits often ignore inter-code relations when multiple enrolment samples are present. Unlike previous works, we formulate stable code extraction using a tensor representation to exactly recover the low-rank, non-noisy iris information from the multiple enrolment samples. Further, the proposed approach produces stable class-specific (user-specific) iris weight maps by eliminating the error bits due to sub-optimal segmentation or pupil dilation effects, using spatial correspondence in a patch-wise manner. Through a set of experiments on two publicly available iris databases acquired under semi-constrained and unconstrained settings, we demonstrate superior identification and verification performance over current state-of-the-art algorithms. A Rank-1 identification rate of 93.3% is achieved on the CASIAv4 distance database, along with a verification accuracy of a Genuine Match Rate (GMR) of 80% at a False Match Rate (FMR) of 0.0001, indicating the applicability of the proposed approach in operational scenarios.
Title: Obtaining Stable Iris Codes Exploiting Low-Rank Tensor Space and Spatial Structure Aware Refinement for Better Iris Recognition
Venue: 2019 International Conference on Biometrics (ICB)
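The operating point reported above (a GMR at a fixed FMR) can be computed from lists of genuine and impostor comparison scores. A minimal sketch, not the authors' implementation, assuming higher scores indicate better matches and ignoring tie handling:

```python
def gmr_at_fmr(genuine_scores, impostor_scores, target_fmr=1e-4):
    """Genuine Match Rate at a fixed False Match Rate (sketch).

    Sorts the impostor scores, places the decision threshold so that at
    most target_fmr of them are accepted, and reports the fraction of
    genuine scores accepted at that threshold.
    """
    imp = sorted(impostor_scores, reverse=True)
    m = min(int(target_fmr * len(imp)), len(imp) - 1)  # impostors we may accept
    thr = imp[m]                                       # decision threshold
    fmr = sum(s > thr for s in imp) / len(imp)
    gmr = sum(s > thr for s in genuine_scores) / len(genuine_scores)
    return gmr, fmr
```

Note that an FMR of 0.0001 only becomes meaningful with on the order of tens of thousands of impostor comparisons or more.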
Pub Date: 2019-06-01 | DOI: 10.1109/ICB45273.2019.8987273
David Yambay, Morgan Johnson, Keivan Bahmani, S. Schuckers
Biometric recognition allows a person to be identified by comparing feature vectors derived from the person’s physiological characteristics. Recognition depends on the permanence of the biometric characteristics over long periods of time. There has been limited work evaluating the footprint as a potential biometric. This paper presents a longitudinal study of toe prints in children to understand whether this biometric modality could be used reliably as a child grows. Data was collected and analyzed from children ages 4-13 years over five visits, spaced approximately six months apart, giving two years of data. This is the first footprint collection spanning this broad age range in children. Footprints were segmented into separate toe prints to examine whether current fingerprint recognition technology can provide accurate results on toe prints. Data was analyzed using two available fingerprint matchers, Verifinger and Bozorth3 from the NIST Biometric Image Software (NBIS). Verifinger provides the best verification match scores on the toe prints, especially when using the hallux, the large toe. The hallux on Verifinger provides verification rates of 0% FAR and FRR for images collected on the same day, and an FRR of 6.44% at a 1% FAR after two years between collections. Additional longitudinal data is being collected to extend these results.
Title: A Feasibility Study on Utilizing Toe Prints for Biometric Verification of Children
Pub Date: 2019-06-01 | DOI: 10.1109/ICB45273.2019.8987338
Yang Yang, Zhen Lei, Jinqiao Wang, S. Li
In this paper, we propose an efficient image representation strategy for addressing the task of small-scale person re-identification. Taking advantage of its compactness and intuitive interpretability, we adopt the color names descriptor (CND) as our color feature. To address the inaccuracy of comparing color names with image pixels in Euclidean space, we propose a new approach, soft Gaussian mapping (SGM), which uses a Gaussian model to bridge their semantic gap. We further present a cross-view coupling learning method to build a common subspace in which the learned features contain the transition information among different cameras. Experiments on challenging small-scale public benchmark datasets demonstrate the effectiveness of our proposed method.
Title: In Defense of Color Names for Small-Scale Person Re-Identification
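The soft Gaussian mapping idea, replacing a hard Euclidean nearest-colour-name assignment with Gaussian-weighted soft assignments, can be sketched as follows; the colour-name centres and the bandwidth `sigma` below are illustrative placeholders, not values from the paper:

```python
import numpy as np

def soft_gaussian_mapping(pixels, color_centres, sigma=20.0):
    """Map RGB pixels to a soft distribution over colour-name centres.

    Instead of hard-assigning each pixel to its nearest colour name in
    Euclidean space, each pixel gets a Gaussian-weighted affinity to
    every centre, normalised to sum to one over the centres.
    """
    pixels = np.asarray(pixels, dtype=float)            # shape (N, 3)
    centres = np.asarray(color_centres, dtype=float)    # shape (K, 3)
    # squared Euclidean distance of every pixel to every centre: (N, K)
    d2 = ((pixels[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return w / w.sum(axis=1, keepdims=True)
```

A nearly red pixel then receives most of its mass on the "red" centre while retaining small, smoothly varying weights on the others, which is what bridges the semantic gap a hard assignment loses.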
Pub Date: 2019-06-01 | DOI: 10.1109/ICB45273.2019.8987414
Abhijit Das, U. Pal, M. Blumenstein, Caiyong Wang, Yong He, Yuhao Zhu, Zhenan Sun
This paper summarizes the results of the Sclera Segmentation Benchmarking Competition (SSBC 2019), organized in the context of the 12th IAPR International Conference on Biometrics (ICB 2019). The aim of this competition was to record developments in sclera segmentation in the cross-resolution environment (the sclera trait captured using multiple acquisition sensors with different image resolutions). Additionally, the competition aimed to draw researchers' attention to this subject. For benchmarking, we employed two datasets of sclera images captured using different sensors: the Multi-Angle Sclera Dataset (MASD version 1), collected using a DSLR camera, and the Mobile Sclera Dataset (MSD), collected using an 8-megapixel mobile phone rear camera. Baseline manual segmentation masks of the sclera images from both datasets were developed. Precision- and recall-based measures were employed to evaluate the effectiveness and ranking of the submitted segmentation techniques. Four algorithms were submitted to address the segmentation task. In this paper we analyze the results produced by these algorithms/systems and define a way forward for this problem. Both datasets, along with some of the accompanying ground-truth/baseline masks, will be made freely available for research purposes.
Title: Sclera Segmentation Benchmarking Competition in Cross-resolution Environment
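The precision- and recall-based evaluation against the manual baseline masks reduces to pixel-wise counting on binary masks; a generic sketch, not the competition's actual scoring script:

```python
import numpy as np

def precision_recall(pred_mask, gt_mask):
    """Pixel-wise precision and recall of a binary segmentation mask
    against a manually annotated ground-truth mask."""
    pred = np.asarray(pred_mask, dtype=bool)
    gt = np.asarray(gt_mask, dtype=bool)
    tp = np.logical_and(pred, gt).sum()        # correctly predicted sclera pixels
    precision = tp / pred.sum() if pred.sum() else 0.0
    recall = tp / gt.sum() if gt.sum() else 0.0
    return float(precision), float(recall)
```

Precision penalises over-segmentation (predicting skin or eyelid as sclera) while recall penalises under-segmentation, which is why both are needed to rank submissions fairly.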
Pub Date: 2019-05-28 | DOI: 10.1109/ICB45273.2019.8987341
Wojciech Michal Matkowski, Krzysztof Matkowski, A. Kong, C. Hall
In digital and multimedia forensics, identification of child sexual offenders based on digital evidence images is highly challenging because the offender’s face or other obvious characteristics such as tattoos are occluded, covered, or not visible at all. Nevertheless, other naked body parts, e.g., the chest, are still visible. Some researchers have proposed skin marks, skin texture, vein or androgenic hair patterns for criminal and victim identification. There are no available studies of the nipple-areola complex (NAC) for offender identification. In this paper, we present a study of offender identification based on the NAC, and we present the NTU-Nipple-v1 dataset, which contains 2732 images of 428 different male nipple-areolae. Popular deep-learning and hand-crafted recognition methods are evaluated on the provided dataset. The results indicate that the NAC can be a useful characteristic for offender identification.
Title: The Nipple-Areola Complex for Criminal Identification
Pub Date: 2019-05-12 | DOI: 10.1109/ICB45273.2019.8987307
A. Ross, Sudipta Banerjee, Cunjian Chen, Anurag Chowdhury, Vahid Mirjalili, Renu Sharma, Thomas Swearingen, Shivangi Yadav
The need for reliably determining the identity of a person is critical in a number of different domains ranging from personal smartphones to border security; from autonomous vehicles to e-voting; from tracking child vaccinations to preventing human trafficking; from crime scene investigation to personalization of customer service. Biometrics, which entails the use of biological attributes such as face, fingerprints and voice for recognizing a person, is being increasingly used in several such applications. While biometric technology has made rapid strides over the past decade, there are several fundamental issues that are yet to be satisfactorily resolved. In this article, we will discuss some of these issues and enumerate some of the exciting challenges in this field.
Title: Some Research Problems in Biometrics: The Future Beckons
Pub Date: 2019-05-02 | DOI: 10.1109/ICB45273.2019.8987281
G. Orrú, Roberto Casula, Pierluigi Tuveri, C. Bazzoni, Giovanna Dessalvi, Marco Micheletto, Luca Ghiani, G. Marcialis
The International Fingerprint Liveness Detection Competition (LivDet) is an open and well-acknowledged meeting point for academia and private companies dealing with the problem of distinguishing images of fingerprint reproductions made of artificial materials from images of real fingerprints. In this edition of LivDet we invited competitors to propose algorithms integrated with matching systems. The goal was to investigate to what extent this integration impacts overall performance. Twelve algorithms were submitted to the competition, eight of which worked on integrated systems.
Title: LivDet in Action - Fingerprint Liveness Detection Competition 2019
Pub Date: 2019-05-01 | DOI: 10.1109/ICB45273.2019.8987245
Juan E. Tapia, Claudia Arellano
Soft biometric information such as gender can contribute to many applications, including identification and security. This paper explores the use of the Binary Statistical Image Features (BSIF) algorithm for classifying gender from iris texture images captured with NIR sensors. It uses the same pipeline as iris recognition systems, consisting of iris segmentation, normalisation and classification. Experiments show that applying BSIF is not straightforward, since it can create artificial textures that cause misclassification. To overcome this limitation, a new set of filters was trained from eye images, and filters of different sizes with padding bands were tested on a subject-disjoint database. A Modified-BSIF (MBSIF) method was implemented, achieving better gender classification results (94.6% and 91.33% for the left and right eye, respectively). These results are competitive with the state of the art in gender classification. As an additional contribution, a novel gender-labelled database was created and will be made available upon request.
Title: Gender Classification from Iris Texture Images Using a New Set of Binary Statistical Image Features
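A BSIF-style descriptor filters the image with a bank of pre-learned linear filters, thresholds each response at zero, and packs the bits into one integer code per pixel. A generic sketch follows; the filter learning step (ICA on natural image patches) and the paper's padding-band modification are omitted, and the wrap-around border handling is an illustrative choice:

```python
import numpy as np

def bsif_code(image, filters):
    """Binarised statistical image features (sketch): correlate the
    image with each filter, threshold responses at zero, and combine
    the resulting bits into an integer code per pixel."""
    img = np.asarray(image, dtype=float)
    code = np.zeros(img.shape, dtype=np.int64)
    for bit, f in enumerate(filters):
        f = np.asarray(f, dtype=float)
        kh, kw = f.shape
        ph, pw = kh // 2, kw // 2
        padded = np.pad(img, ((ph, ph), (pw, pw)), mode="wrap")
        resp = np.zeros_like(img)
        for i in range(kh):                 # direct 2-D correlation
            for j in range(kw):
                resp += f[i, j] * padded[i:i + img.shape[0], j:j + img.shape[1]]
        code |= (resp > 0).astype(np.int64) << bit
    return code
```

Histograms of these per-pixel codes over the normalised iris then serve as the texture feature fed to the gender classifier.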
Pub Date: 2019-04-15 | DOI: 10.1109/ICB45273.2019.8987329
Xing Di, B. Riggan, Shuowen Hu, Nathan J. Short, Vishal M. Patel
Polarimetric thermal to visible face verification entails matching two images with significant domain differences. Several recent approaches have attempted to synthesize visible faces from thermal images for cross-modal matching. In this paper, we take a different approach: rather than focusing only on synthesizing visible faces from thermal faces, we also propose to synthesize thermal faces from visible faces. Our intuition is that thermal images also contain discriminative information about the person that is useful for verification. Deep features from a pre-trained Convolutional Neural Network (CNN) are extracted from the original as well as the synthesized images. These features are then fused to generate a template, which is then used for verification. The proposed synthesis network is based on the self-attention generative adversarial network (SAGAN), which allows efficient attention-guided image synthesis. Extensive experiments on the ARL polarimetric thermal face dataset demonstrate that the proposed method achieves state-of-the-art performance.
Title: Polarimetric Thermal to Visible Face Verification via Self-Attention Guided Synthesis
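The template-generation step, fusing deep features of the original image and its cross-spectrum synthesis before matching, can be sketched generically; simple concatenation and cosine similarity below are illustrative choices, not necessarily the paper's exact fusion rule:

```python
import numpy as np

def fuse_and_score(feat_orig, feat_synth, probe_orig, probe_synth):
    """Fuse CNN features of an image and its cross-spectrum synthesis
    into one L2-normalised template, then compare gallery and probe
    templates by cosine similarity."""
    def template(a, b):
        t = np.concatenate([np.asarray(a, float), np.asarray(b, float)])
        return t / np.linalg.norm(t)
    g = template(feat_orig, feat_synth)      # gallery template
    p = template(probe_orig, probe_synth)    # probe template
    return float(g @ p)                      # cosine similarity score
```

Because both spectra contribute to each template, discriminative cues that survive only in the thermal domain are not discarded at match time.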
Pub Date: 2019-04-12 | DOI: 10.1109/ICB45273.2019.8987290
Artur Costa-Pazo, David Jiménez-Cabello, Esteban Vázquez-Fernández, J. Alba-Castro, R. López-Sastre
Over the past few years, Presentation Attack Detection (PAD) has become a fundamental part of facial recognition systems. Although much effort has been devoted to anti-spoofing research, generalization to real scenarios remains a challenge. In this paper we present a new open-source evaluation framework to study the generalization capacity of face PAD methods, coined here as face-GPAD. This framework facilitates the creation of new protocols focused on the generalization problem, establishing fair procedures for evaluating and comparing PAD solutions. We also introduce a large aggregated and categorized dataset to address the problem of incompatibility between publicly available datasets. Finally, we propose a benchmark with two novel evaluation protocols: one measuring the effect of variations in face resolution, and one evaluating the influence of adversarial operating conditions.
Title: Generalized Presentation Attack Detection: a face anti-spoofing evaluation proposal