Detection of Glasses in Near-Infrared Ocular Images
Pub Date: 2018-02-01 | DOI: 10.1109/ICB2018.2018.00039
P. Drozdowski, F. Struck, C. Rathgeb, C. Busch
Eyeglasses change the appearance and visual perception of facial images. Moreover, under objective metrics, glasses generally deteriorate the sample quality of near-infrared ocular images and can consequently worsen the biometric performance of iris recognition systems. Automatic detection of glasses is therefore a prerequisite for ensuring sufficient sample quality in an interactive sample acquisition process within an automatic iris recognition system. In this paper, three approaches for automatic detection of glasses in near-infrared iris images are presented: a statistical method, a deep learning based method and an algorithmic method based on detection of edges and reflections. These approaches are evaluated using cross-validation on the CASIA-IrisV4-Thousand dataset, which contains 20,000 images from 1,000 subjects. Individually, they are capable of correctly classifying 95-98% of images, while a majority vote based fusion of the three approaches achieves a correct classification rate (CCR) of 99.54%.
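The fusion step lends itself to a compact illustration. Below is a minimal sketch of majority-vote fusion over three binary glasses detectors; the `predict(image) -> bool` detector interface and the class names in the usage comment are illustrative assumptions, not the paper's API.

```python
# Minimal sketch of majority-vote fusion for three binary glasses detectors.
# The detector interface (predict(image) -> bool) is an illustrative assumption.

def majority_vote(detectors, image):
    """Return True (glasses present) if at least two of the three detectors agree."""
    votes = sum(1 for d in detectors if d.predict(image))
    return votes >= 2

# Usage with three hypothetical detectors:
# detectors = [StatisticalDetector(), CnnDetector(), EdgeReflectionDetector()]
# has_glasses = majority_vote(detectors, nir_image)
```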
{"title":"Detection of Glasses in Near-Infrared Ocular Images","authors":"P. Drozdowski, F. Struck, C. Rathgeb, C. Busch","doi":"10.1109/ICB2018.2018.00039","DOIUrl":"https://doi.org/10.1109/ICB2018.2018.00039","url":null,"abstract":"Eyeglasses change the appearance and visual perception of facial images. Moreover, under objective metrics, glasses generally deteriorate the sample quality of near-infrared ocular images and as a consequence can worsen the biometric performance of iris recognition systems. Automatic detection of glasses is therefore one of the prerequisites for a sufficient quality, interactive sample acquisition process in an automatic iris recognition system. In this paper, three approaches (i.e. a statistical method, a deep learning based method and an algorithmic method based on detection of edges and reflections) for automatic detection of glasses in near-infrared iris images are presented. Those approaches are evaluated using cross-validation on the CASIA-IrisV4-Thousand dataset, which contains 20000 images from 1000 subjects. Individually, they are capable of correctly classifying 95-98% of images, while a majority vote based fusion of the three approaches achieves a correct classification rate (CCR) of 99.54%.","PeriodicalId":130957,"journal":{"name":"2018 International Conference on Biometrics (ICB)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132401104","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
IARPA Janus Benchmark - C: Face Dataset and Protocol
Pub Date: 2018-02-01 | DOI: 10.1109/ICB2018.2018.00033
Brianna Maze, Jocelyn C. Adams, James A. Duncan, N. Kalka, Tim Miller, C. Otto, Anil K. Jain, W. T. Niggel, Janet Anderson, J. Cheney, P. Grother
Although considerable work has been done in recent years to drive the state of the art in facial recognition towards operation on fully unconstrained imagery, research has always been restricted by a lack of datasets in the public domain. In addition, traditional biometrics experiments such as single image verification and closed set recognition do not adequately evaluate the ways in which unconstrained face recognition systems are used in practice. The IARPA Janus Benchmark–C (IJB-C) face dataset advances the goal of robust unconstrained face recognition, improving upon the previous public domain IJB-B dataset, by increasing dataset size and variability, and by introducing end-to-end protocols that more closely model operational face recognition use cases. IJB-C adds 1,661 new subjects to the 1,870 subjects released in IJB-B, with increased emphasis on occlusion and diversity of subject occupation and geographic origin with the goal of improving representation of the global population. Annotations on IJB-C imagery have been expanded to allow for further covariate analysis, including a spatial occlusion grid to standardize analysis of occlusion. Due to these enhancements, the IJB-C dataset is significantly more challenging than other datasets in the public domain and will advance the state of the art in unconstrained face recognition.
{"title":"IARPA Janus Benchmark - C: Face Dataset and Protocol","authors":"Brianna Maze, Jocelyn C. Adams, James A. Duncan, N. Kalka, Tim Miller, C. Otto, Anil K. Jain, W. T. Niggel, Janet Anderson, J. Cheney, P. Grother","doi":"10.1109/ICB2018.2018.00033","DOIUrl":"https://doi.org/10.1109/ICB2018.2018.00033","url":null,"abstract":"Although considerable work has been done in recent years to drive the state of the art in facial recognition towards operation on fully unconstrained imagery, research has always been restricted by a lack of datasets in the public domain. In addition, traditional biometrics experiments such as single image verification and closed set recognition do not adequately evaluate the ways in which unconstrained face recognition systems are used in practice. The IARPA Janus Benchmark–C (IJB-C) face dataset advances the goal of robust unconstrained face recognition, improving upon the previous public domain IJB-B dataset, by increasing dataset size and variability, and by introducing end-to-end protocols that more closely model operational face recognition use cases. IJB-C adds 1,661 new subjects to the 1,870 subjects released in IJB-B, with increased emphasis on occlusion and diversity of subject occupation and geographic origin with the goal of improving representation of the global population. Annotations on IJB-C imagery have been expanded to allow for further covariate analysis, including a spatial occlusion grid to standardize analysis of occlusion. Due to these enhancements, the IJB-C dataset is significantly more challenging than other datasets in the public domain and will advance the state of the art in unconstrained face recognition.","PeriodicalId":130957,"journal":{"name":"2018 International Conference on Biometrics (ICB)","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132629689","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Impact of Age and Threshold Variation on Facial Recognition Algorithm Performance Using Images of Children
Pub Date: 2018-02-01 | DOI: 10.1109/ICB2018.2018.00041
Dana Michalski, Sau Yee Yiu, C. Malec
Facial recognition across ageing, and in particular with images of children, remains a challenging problem in a wide range of operational settings. Yet research examining algorithm performance on images of children is limited, with minimal understanding of how age and age variation (i.e., the age difference between the images being compared) impact performance. Operationally, a fixed threshold based on images of adults may be used without considering how this affects performance with children. Varying the threshold based on age and age variation may be a better approach when comparing images of children. This paper evaluates the performance of a commercial off-the-shelf (COTS) facial recognition algorithm to determine the impact that age (0-17 years) and age variation (0-10 years) have on a controlled operational dataset of facial images, using both a fixed-threshold and a threshold-variation approach. The evaluation shows that performance with images of children differs considerably across age and age variation, and that in some operational settings threshold variation may be beneficial when conducting facial recognition with children.
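The threshold-variation idea can be sketched in a few lines: pick the verification threshold from the age of the subject rather than using one fixed adult-derived value. The age buckets and threshold values below are hypothetical placeholders, not figures from the paper.

```python
# Illustrative sketch of threshold variation by age. All numeric values
# (bucket boundaries and thresholds) are hypothetical placeholders.

AGE_THRESHOLDS = [
    (0, 4, 0.72),    # (min_age, max_age, similarity-score threshold)
    (5, 11, 0.65),
    (12, 17, 0.58),
]
FIXED_ADULT_THRESHOLD = 0.50

def threshold_for_age(age_years: int) -> float:
    """Return the age-specific threshold, falling back to the adult value."""
    for lo, hi, thr in AGE_THRESHOLDS:
        if lo <= age_years <= hi:
            return thr
    return FIXED_ADULT_THRESHOLD

def verify(similarity_score: float, age_years: int) -> bool:
    return similarity_score >= threshold_for_age(age_years)
```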
{"title":"The Impact of Age and Threshold Variation on Facial Recognition Algorithm Performance Using Images of Children","authors":"Dana Michalski, Sau Yee Yiu, C. Malec","doi":"10.1109/ICB2018.2018.00041","DOIUrl":"https://doi.org/10.1109/ICB2018.2018.00041","url":null,"abstract":"Facial recognition across ageing and in particular with images of children remains a challenging problem in a wide of range of operational settings. Yet, research examining algorithm performance with images of children is limited with minimal understanding of how age and age variation (i.e., age difference between images being compared) impacts on performance. Operationally, a fixed threshold based on images of adults may be used without considering that this could impact on performance with children. Threshold variation based on age and age variation may be a better approach when comparing images of children. This paper evaluates the performance of a commercial off-the-shelf (COTS) facial recognition algorithm to determine the impact that age (0–17 years) and age variation (0–10 years) has on a controlled operational dataset of facial images using both a fixed threshold and threshold variation approach. This evaluation shows that performance of children differs considerably across age and age variation, and in some operational settings, threshold variation may be beneficial for conducting facial recognition with children.","PeriodicalId":130957,"journal":{"name":"2018 International Conference on Biometrics (ICB)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123481623","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fingerprint Synthesis: Evaluating Fingerprint Search at Scale
Pub Date: 2018-02-01 | DOI: 10.1109/ICB2018.2018.00016
Kai Cao, Anil K. Jain
A large database of fingerprint images is highly desirable for designing and evaluating large-scale fingerprint search algorithms. Compared to collecting a large number of real fingerprints, which is very costly in terms of time, effort and expense and also raises stringent privacy issues, synthetic fingerprints can be generated at low cost and carry no privacy concerns. However, it is essential to show that the characteristics and appearance of real and synthetic fingerprint images are sufficiently similar. We propose a Generative Adversarial Network (GAN) to generate 512×512 rolled fingerprint images. Our generative model for rolled fingerprints is highly efficient (12 ms/image), and the characteristics of the synthetic rolled prints are close to those of real rolled images. Experimental results show that our model captures the properties of real rolled fingerprints in terms of (i) fingerprint image quality, (ii) distinctiveness and (iii) minutiae configuration. Our synthetic fingerprint images are more realistic than those produced by other approaches.
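For orientation, a DCGAN-style generator producing 512×512 single-channel images might look like the following sketch. The architecture (layer counts, channel widths, PyTorch as framework) is an assumption for illustration; the paper's exact network may differ.

```python
# Minimal sketch of a DCGAN-style generator for 512x512 grayscale
# fingerprint-like images. Architecture details are assumptions, not the
# authors' exact model.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, z_dim=100, base=16):
        super().__init__()
        layers = []
        channels = base * 64  # 1024 feature maps at the 4x4 stage
        # Project the latent vector to a 4x4 feature map.
        layers += [nn.ConvTranspose2d(z_dim, channels, 4, 1, 0, bias=False),
                   nn.BatchNorm2d(channels), nn.ReLU(True)]
        # Seven stride-2 upsampling stages: 4 -> 8 -> ... -> 512.
        for _ in range(7):
            out_ch = max(channels // 2, base)
            layers += [nn.ConvTranspose2d(channels, out_ch, 4, 2, 1, bias=False),
                       nn.BatchNorm2d(out_ch), nn.ReLU(True)]
            channels = out_ch
        # Final single-channel grayscale output in [-1, 1].
        layers += [nn.Conv2d(channels, 1, 3, 1, 1), nn.Tanh()]
        self.net = nn.Sequential(*layers)

    def forward(self, z):
        return self.net(z.view(z.size(0), -1, 1, 1))

# g = Generator(); fake = g(torch.randn(2, 100))  # -> shape (2, 1, 512, 512)
```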
{"title":"Fingerprint Synthesis: Evaluating Fingerprint Search at Scale","authors":"Kai Cao, Anil K. Jain","doi":"10.1109/ICB2018.2018.00016","DOIUrl":"https://doi.org/10.1109/ICB2018.2018.00016","url":null,"abstract":"A database of a large number of fingerprint images is highly desired for designing and evaluating large scale fingerprint search algorithms. Compared to collecting a large number of real fingerprints, which is very costly in terms of time, effort and expense, and also involves stringent privacy issues, synthetic fingerprints can be generated at low cost and does not have any privacy issues to deal with. However, it is essential to show that the characteristics and appearance of real and synthetic fingerprint images are sufficiently similar. We propose a Generative Adversarial Network (GAN) to generate 512X512 rolled fingerprint images. Our generative model for rolled fingerprints is highly efficient (12ms/image) with characteristics of synthetic rolled prints close to real rolled images. Experimental results show that our model captures the properties of real rolled fingerprints in terms of (i) fingerprint image quality, (ii) distinctiveness and (iii) minutiae configuration. Our synthetic fingerprint images are more realistic than other approaches.","PeriodicalId":130957,"journal":{"name":"2018 International Conference on Biometrics (ICB)","volume":"109 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124899604","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Style Signatures to Combat Biometric Menagerie in Stylometry
Pub Date: 2018-02-01 | DOI: 10.1109/ICB2018.2018.00047
Kalaivani Sundararajan, T. Neal, D. Woodard
In this paper, we investigate the challenges of using a person's writing style as a cognitive biometric modality by applying Doddington's idea of the biometric menagerie. To the best of our knowledge, this is the first time biometric menagerie analysis has been performed on a cognitive biometric modality. The presence of goats, wolves and lambs in this modality is demonstrated using two publicly available datasets, Blogs and IMDB1M. To combat this challenging problem, we further propose using person-specific features, referred to as "style signatures", which may be better at distinguishing between individuals. Experimental results show that using person-specific style signatures improves verification by 3.6-5.5% on both datasets.
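A common way to operationalize Doddington's menagerie is via percentile tails of per-user score statistics, roughly as sketched below. The 5% cutoff is a conventional choice in menagerie analyses, not a value taken from this paper.

```python
# Sketch of percentile-based menagerie classification: goats match poorly
# against themselves, lambs are easily imitated by others, wolves imitate
# others well. Cutoff percentile is a conventional placeholder.
import numpy as np

def menagerie(genuine_means, lamb_means, wolf_means, pct=5.0):
    """Each argument maps user_id -> mean comparison score in that role."""
    def tail(d, low):
        vals = np.array(list(d.values()))
        cut = np.percentile(vals, pct if low else 100 - pct)
        return {u for u, v in d.items() if (v <= cut if low else v >= cut)}

    goats = tail(genuine_means, low=True)   # lowest genuine scores
    lambs = tail(lamb_means, low=False)     # highest scores as impostor targets
    wolves = tail(wolf_means, low=False)    # highest scores as impostor attackers
    return goats, lambs, wolves
```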
{"title":"Style Signatures to Combat Biometric Menagerie in Stylometry","authors":"Kalaivani Sundararajan, T. Neal, D. Woodard","doi":"10.1109/ICB2018.2018.00047","DOIUrl":"https://doi.org/10.1109/ICB2018.2018.00047","url":null,"abstract":"In this paper, we investigate the challenges of using a person's writing style as a cognitive biometric modality by applying Doddington's idea of Biometric menagerie. To the best of our knowledge, biometric menagerie analysis has been on performed on a cognitive biometric modality for the first time. The presence of goats, wolves and lambs in this modality is demonstrated using two publicly available datasets - Blogs and IMDB1M. To combat this challenging problem, we further propose using person-specific features referred to as \"Style signatures\" which may be better at distinguishing different individuals. Experimental results show that using person-specific Style signatures improve verification by 3.6-5.5% on both datasets.","PeriodicalId":130957,"journal":{"name":"2018 International Conference on Biometrics (ICB)","volume":"275 ","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114090464","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Filter Design Based on Spectral Dictionary for Latent Fingerprint Pre-enhancement
Pub Date: 2018-02-01 | DOI: 10.1109/ICB2018.2018.00015
Watcharapong Chaidee, K. Horapong, V. Areekul
We introduce a pre-enhancement algorithm to improve the efficiency of automatic fingerprint identification systems (AFIS) for latent fingerprint search. The proposed algorithm uses learning to construct a spectral dictionary from the spectral responses of a Gabor filter bank in the frequency domain. Given an input latent fingerprint, the spectral dictionary yields a set of appropriate filters for each partitioning window of the entire latent fingerprint image. The proposed set of spectral filters helps enhance and preserve highly curved ridges in the region around the singular point, where other methods fail. The proposed method outperforms state-of-the-art algorithms in identification accuracy on the good and bad cases of the NIST SD27 latent fingerprint database.
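As a rough stand-in for the learned dictionary, the sketch below builds a Gabor filter bank, scores each filter against a window's FFT magnitude spectrum, and enhances the window with the best match. The bank parameters and the correlation-based selection rule are simplifying assumptions, not the paper's learning procedure.

```python
# Simplified sketch of per-window filter selection from a Gabor spectral
# bank; a stand-in for the paper's learned spectral dictionary.
import numpy as np
import cv2

def gabor_bank(ksize=32, freqs=(0.08, 0.12, 0.16), n_orients=8):
    """Build Gabor kernels over a few ridge frequencies and orientations."""
    bank = []
    for f in freqs:
        for t in np.linspace(0, np.pi, n_orients, endpoint=False):
            k = cv2.getGaborKernel((ksize, ksize), sigma=4.0, theta=t,
                                   lambd=1.0 / f, gamma=1.0, psi=0)
            bank.append(k)
    return bank

def enhance_window(window, bank):
    """Enhance one partitioning window with its best-matching Gabor filter."""
    spec = np.abs(np.fft.fft2(window))
    # Score each filter by the overlap of its spectral response with the
    # window's magnitude spectrum.
    scores = [np.sum(np.abs(np.fft.fft2(k, s=window.shape)) * spec) for k in bank]
    best = bank[int(np.argmax(scores))]
    return cv2.filter2D(window.astype(np.float32), -1, best)
```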
{"title":"Filter Design Based on Spectral Dictionary for Latent Fingerprint Pre-enhancement","authors":"Watcharapong Chaidee, K. Horapong, V. Areekul","doi":"10.1109/ICB2018.2018.00015","DOIUrl":"https://doi.org/10.1109/ICB2018.2018.00015","url":null,"abstract":"We introduce a pre-enhancement algorithm to improve efficiency of the automatic fingerprint identification systems (AFIS) for latent fingerprint search. The proposed algorithm employs learning to construct a spectral dictionary from spectral responses of a Gabor filter bank in the frequency domain. Given an input latent fingerprint, the spectral dictionary yields a set of appropriate filters for each partitioning window of the entire latent fingerprint image. The proposed set of spectral filters helps improve and preserve highly-curved ridges in region around the singular point, while the other methods fail. The proposed method outperforms state-of-the-art algorithms in identification accuracy with the good and bad cases of the NIST SD27 latent fingerprint database.","PeriodicalId":130957,"journal":{"name":"2018 International Conference on Biometrics (ICB)","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127446460","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Metadata-Based Feature Aggregation Network for Face Recognition
Pub Date: 2018-02-01 | DOI: 10.1109/ICB2018.2018.00028
Nishant Sankaran, S. Tulyakov, S. Setlur, V. Govindaraju
This paper presents a novel approach to feature aggregation for template/set-based face recognition that incorporates metadata about the face images to evaluate how representative each feature is of the template. We propose using orthogonal data, such as yaw, pitch and face size, to augment the capacity of deep neural networks to find stronger correlations between the relative quality of a face image in the set and match performance. The presented approach employs a Siamese architecture, trains on features and metadata generated by other state-of-the-art CNNs, and learns an effective feature fusion strategy for producing optimal face verification performance. We obtain substantial improvements in TAR of over 1.5% at an FAR of 10^-4 compared to traditional pooling approaches, and we illustrate the efficacy of the network's quality assessment on two challenging datasets, IJB-A and IARPA Janus CS4.
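The core aggregation idea can be sketched as metadata-conditioned weighted pooling: a small network maps per-image metadata to a quality weight, and the template descriptor is the weighted average of per-image CNN features. The layer sizes and the softmax weighting below are illustrative assumptions, not the authors' exact model.

```python
# Sketch of metadata-conditioned feature aggregation. Architecture details
# (hidden size, softmax normalization) are assumptions for illustration.
import torch
import torch.nn as nn

class MetadataPooling(nn.Module):
    def __init__(self, meta_dim=3, hidden=32):
        super().__init__()
        # Maps (yaw, pitch, face size, ...) to a scalar quality score.
        self.quality = nn.Sequential(
            nn.Linear(meta_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1))

    def forward(self, feats, meta):
        # feats: (n_images, feat_dim), meta: (n_images, meta_dim)
        w = torch.softmax(self.quality(meta).squeeze(-1), dim=0)  # (n_images,)
        return (w.unsqueeze(-1) * feats).sum(dim=0)  # pooled (feat_dim,) template

# pool = MetadataPooling()
# template = pool(torch.randn(5, 512), torch.randn(5, 3))
```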
{"title":"Metadata-Based Feature Aggregation Network for Face Recognition","authors":"Nishant Sankaran, S. Tulyakov, S. Setlur, V. Govindaraju","doi":"10.1109/ICB2018.2018.00028","DOIUrl":"https://doi.org/10.1109/ICB2018.2018.00028","url":null,"abstract":"This paper presents a novel approach to feature aggregation for template/set based face recognition by incorporating metadata regarding face images to evaluate the representativeness of a feature in the template. We propose using orthogonal data like yaw, pitch, face size, etc. to augment the capacity of deep neural networks to find stronger correlations between the relative quality of the face image in the set with the match performance. The approach presented employs a siamese architecture for training on features and metadata generated using other state-of-the-art CNNs and learns an effective feature fusion strategy for producing optimal face verification performance. We obtain substantial improvements in TAR of over 1.5% at 10^-4 FAR as compared to traditional pooling approaches and illustrate the efficacy of the quality assessment made by the network on the two challenging datasets IJB-A and IARPA Janus CS4.","PeriodicalId":130957,"journal":{"name":"2018 International Conference on Biometrics (ICB)","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134401157","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multi-spectral Iris Segmentation in Visible Wavelengths
Pub Date: 2018-02-01 | DOI: 10.1109/ICB2018.2018.00037
Torsten Schlett, C. Rathgeb, C. Busch
While traditional iris recognition systems operate on near-infrared images, visible-wavelength approaches have gained attention in recent years for a variety of reasons, such as the deployment of iris recognition in consumer-grade mobile devices. Iris segmentation, the process of localizing the iris region of an image, is a vital step in iris recognition. Segmenting the iris usually involves detecting the inner and outer iris boundaries, detecting eyelids, excluding eyelashes and contact lens rings, and scrubbing specular reflections. This work presents a comprehensive multi-spectral analysis to improve iris segmentation accuracy in visible wavelengths by transforming iris images before segmentation, extracting their spectral components in the form of RGB color channels. The procedure is evaluated using the MobBIO dataset, open-source iris segmentation tools and the NICE.I error measures. Additionally, a segmentation-level fusion procedure based on existing work is performed; an eye color analysis is examined, with no clear connection to the multi-spectral procedure found; and a further analysis highlights the potential for additional improvement by assuming perfect selection within various multi-spectral segmentation result sets.
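The transformation step itself is simple: split the visible-wavelength image into its R, G and B components and feed each to the segmentation tool as a grayscale input. A minimal sketch (the file names are placeholders):

```python
# Minimal sketch of extracting RGB spectral components before segmentation.
import cv2

def spectral_components(path):
    bgr = cv2.imread(path, cv2.IMREAD_COLOR)  # OpenCV loads channels as B, G, R
    b, g, r = cv2.split(bgr)
    return {"R": r, "G": g, "B": b}  # each a single-channel image

# for name, channel in spectral_components("eye.png").items():
#     cv2.imwrite(f"eye_{name}.png", channel)  # hand to the segmentation tool
```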
{"title":"Multi-spectral Iris Segmentation in Visible Wavelengths","authors":"Torsten Schlett, C. Rathgeb, C. Busch","doi":"10.1109/ICB2018.2018.00037","DOIUrl":"https://doi.org/10.1109/ICB2018.2018.00037","url":null,"abstract":"While traditional iris recognition systems operate using near-infrared images, visible wavelength approaches have gained attention in recent years due to a variety of reasons, such as the deployment of iris recognition in consumer grade mobile devices. Iris segmentation, the process of localizing the iris part of an image, is a vital step in iris recognition. The segmentation of the iris usually involves a detection of inner and outer iris boundaries, a detection of eyelids, an exclusion of eyelashes as well as contact lens rings and a scrubbing of specular reflections. This work presents a comprehensive multi-spectral analysis to improve iris segmentation accuracy in visible wavelengths by transforming iris images before their segmentation, which is done by extracting spectral components in form of RGB color channels. The procedure is evaluated by utilizing the MobBIO dataset, open-source iris segmentation tools, and the NICE.I error measures. Additionally, a segmentation-level fusion procedure based on existing work is performed; an eye color analysis is examined, with no clear connection to the multi-spectral procedure being found; and another analysis highlights further potential improvement by assuming perfect selection within various multi-spectral segmentation result sets.","PeriodicalId":130957,"journal":{"name":"2018 International Conference on Biometrics (ICB)","volume":"104 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130386328","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multi-sample Compression of Iris Images Using High Efficiency Video Coding
Pub Date: 2018-02-01 | DOI: 10.1109/ICB2018.2018.00051
C. Rathgeb, Torsten Schlett, Nicolas Buchmann, Harald Baier, C. Busch
When multiple image samples of a single eye are captured during enrolment, the accuracy of iris recognition systems can be substantially improved. However, storage requirements increase significantly if the system stores multiple iris images per enrolled eye. We consider this practical scenario and provide a comparative study of relevant image compression algorithms, i.e. JPEG, JPEG 2000 and the more recently introduced Better Portable Graphics (BPG) algorithm, which is based on a subset of the High Efficiency Video Coding (HEVC) standard. We propose a HEVC-based multi-sample compression that takes advantage of inter-frame prediction to achieve more compact storage of iris images. Experiments on cropped iris images from the IITDv1 and CASIAv4-Interval datasets confirm the usefulness of the presented approach. Compared to separately storing multiple BPG-encoded images of 2 to 3 KB each, the required storage space can be reduced by at least 30% if images are acquired in a single session. Similarly, at constant file sizes, a relative image quality enhancement of at least 5% in terms of PSNR is achieved. Compared to the widely recommended JPEG 2000 compression, the performance gains become even more pronounced. The gains in image quality are also reflected in experiments on recognition performance.
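The gist of multi-sample compression is to treat the N iris images of one eye as a short video sequence so the HEVC encoder can exploit inter-frame prediction. One way to sketch this is with ffmpeg and libx265 (assuming an ffmpeg build with libx265 on PATH); the file-name pattern and CRF value are placeholders, not settings from the paper.

```python
# Illustrative sketch: encode one eye's enrolment samples as an HEVC
# sequence so inter-frame prediction removes redundancy between samples.
# Assumes ffmpeg with libx265 is installed; parameters are placeholders.
import subprocess

def compress_iris_samples(pattern="iris_%02d.png", out="iris_template.hevc", crf=30):
    subprocess.run([
        "ffmpeg", "-y",
        "-i", pattern,        # image sequence: iris_00.png, iris_01.png, ...
        "-c:v", "libx265",    # HEVC encoder
        "-crf", str(crf),     # quality/size trade-off
        out,                  # raw HEVC bitstream holding all samples
    ], check=True)
```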
{"title":"Multi-sample Compression of Iris Images Using High Efficiency Video Coding","authors":"C. Rathgeb, Torsten Schlett, Nicolas Buchmann, Harald Baier, C. Busch","doi":"10.1109/ICB2018.2018.00051","DOIUrl":"https://doi.org/10.1109/ICB2018.2018.00051","url":null,"abstract":"When multiple image samples of a single eye are captured during enrolment the accuracy of iris recognition systems can be substantially improved. However, storage requirement significantly increases if the system stores multiple iris images per enrolled eye. We consider this practical scenario and provide a comparative study on the usefulness of relevant image compression algorithms, i.e. JPEG, JPEG 2000 and the more recently introduced Better Portable Graphics (BPG) algorithm, which is based on a subset of the High Efficiency Video Coding (HEVC) standard. We propose a HEVC-based multi-sample compression which takes advantage of inter-frame prediction to achieve a more compact storage of iris images. Experiments on cropped iris images of the IITDv1 and the CASIAv4-Interval datasets confirm the usefulness of the presented approach. Compared to a separate storage of multiple BPG encoded images of size 2 to 3 KB the required storage space can be reduced by at least 30% if images are acquired in a single session. Similarly, at constant file sizes a relative enhancement of image quality of at least 5% in terms of PSNR is achieved. Compared to the widely recommended JPEG 2000 compression, obtained performance gains become even more pronounced. Gains with respect to image quality are also reflected in experiments on recognition performance.","PeriodicalId":130957,"journal":{"name":"2018 International Conference on Biometrics (ICB)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122117607","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Dark Side of the Face: Exploring the Ultraviolet Spectrum for Face Biometrics
Pub Date: 2018-02-01 | DOI: 10.1109/ICB2018.2018.00036
Timotheos Samartzidis, Dirk Siegmund, Michael Gödde, N. Damer, Andreas Braun, Arjan Kuijper
Facial recognition in the visible spectrum is a widely used application, but it also remains a major field of research. In this paper we present melanin face pigmentation (MFP) as a new modality for extending classical face biometrics. Melanin pigmentation consists of sun-damaged skin cells that form patterns on human skin, some visible and some not. Most MFP becomes apparent in people's faces only under ultraviolet (UV) imaging. To demonstrate the relevance of this feature for biometrics, we present a novel image dataset of 91 multiethnic subjects in both the visible and the UV spectrum. We show a method for extracting MFP features from UV images using the well-known SURF features and compare it with other techniques. To prove its benefits, we use weighted score-level fusion and evaluate performance in a one-against-all comparison. We observed a significant performance gain when traditional face recognition in the visible spectrum is extended with MFP features from UV images. We conclude with a future perspective on the use of these features for future research and discuss observed issues and limitations.
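The two building blocks named above, SURF extraction on the UV image and weighted score-level fusion with a conventional face matcher, can be sketched as follows. Note that `cv2.xfeatures2d.SURF_create` requires an opencv-contrib build (SURF was long patent-encumbered), and the fusion weight below is a placeholder, not the value tuned in the paper.

```python
# Sketch of MFP feature extraction (SURF on a UV image) and weighted
# score-level fusion. Fusion weight is a placeholder assumption.
import cv2

def mfp_descriptors(uv_image_gray):
    """Detect SURF keypoints/descriptors on a grayscale UV face image."""
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    keypoints, descriptors = surf.detectAndCompute(uv_image_gray, None)
    return keypoints, descriptors

def fuse_scores(face_score, mfp_score, w=0.7):
    """Weighted score-level fusion of normalized [0, 1] comparison scores."""
    return w * face_score + (1.0 - w) * mfp_score
```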
{"title":"The Dark Side of the Face: Exploring the Ultraviolet Spectrum for Face Biometrics","authors":"Timotheos Samartzidis, Dirk Siegmund, Michael Gödde, N. Damer, Andreas Braun, Arjan Kuijper","doi":"10.1109/ICB2018.2018.00036","DOIUrl":"https://doi.org/10.1109/ICB2018.2018.00036","url":null,"abstract":"Facial recognition in the visible spectrum is a widely used application but it is also still a major field of research. In this paper we present melanin face pigmentation (MFP) as a new modality to be used to extend classical face biometrics. Melanin pigmentation are sun-damaged cells that occur as revealed and/or unrevealed pattern on human skin. Most MFP can be found in the faces of some people when using ultraviolet (UV) imaging. To proof the relevance of this feature for biometrics, we present a novel image dataset of 91 multiethnic subjects in both, the visible and the UV spectrum. We show a method to extract the MFP features from the UV images, using the well known SURF features and compare it with other techniques. In order to proof its benefits, we use weighted score-level fusion and evaluate the performance in an one against all comparison. As a result we observed a significant amplification of performance where traditional face recognition in the visible spectrum is extended with MFP from UV images. We conclude with a future perspective about the use of these features for future research and discuss observed issues and limitations.","PeriodicalId":130957,"journal":{"name":"2018 International Conference on Biometrics (ICB)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114784876","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}