Using fragile bit coincidence to improve iris recognition
Pub Date: 2009-09-28 | DOI: 10.1109/BTAS.2009.5339036
K. Hollingsworth, K. W. Bowyer, P. Flynn
The most common iris biometric algorithm represents the texture of an iris using a binary iris code. Not all bits in an iris code are of equal value. A bit is deemed fragile if it varies in value across iris codes created from different images of the same iris. Previous research has shown that iris recognition performance can be improved by masking these fragile bits. Rather than ignoring fragile bits completely, we consider what beneficial information can be obtained from the fragile bits. We find that the locations of fragile bits tend to be consistent across different iris codes of the same eye. We present a metric, called the fragile bit distance, which quantitatively measures the coincidence of the fragile bit patterns in two iris codes. We find that score-fusion of fragile bit distance and Hamming distance works better for recognition than Hamming distance alone. This is the first and only work that we are aware of to use the coincidence of fragile bit locations to improve the accuracy of matches.
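As a rough illustration only (not the authors' exact formulation), the sketch below fuses a masked fractional Hamming distance with a fragile-bit-coincidence term; all inputs are boolean arrays of equal shape (iris code bits, occlusion masks, per-bit fragility flags), and both the fragile-bit distance shown here and the fusion weight alpha are assumptions made for the example.

```python
# Illustrative sketch: score fusion of Hamming distance and a fragile-bit term.
# The fragile_bit_distance definition and the weight alpha are assumptions,
# not the formulation from the paper.
import numpy as np

def hamming_distance(code_a, code_b, mask_a, mask_b):
    """Fractional Hamming distance over bits unmasked in both iris codes."""
    valid = mask_a & mask_b
    disagree = (code_a ^ code_b) & valid
    return disagree.sum() / max(int(valid.sum()), 1)

def fragile_bit_distance(fragile_a, fragile_b, mask_a, mask_b):
    """Assumed form: fraction of commonly valid positions where the two codes
    disagree on which bits are fragile (less coincidence -> larger distance)."""
    valid = mask_a & mask_b
    disagree = (fragile_a ^ fragile_b) & valid
    return disagree.sum() / max(int(valid.sum()), 1)

def fused_score(code_a, code_b, mask_a, mask_b, fragile_a, fragile_b, alpha=0.8):
    """Weighted-sum score fusion; smaller scores indicate a likelier match."""
    hd = hamming_distance(code_a, code_b, mask_a, mask_b)
    fbd = fragile_bit_distance(fragile_a, fragile_b, mask_a, mask_b)
    return alpha * hd + (1.0 - alpha) * fbd
```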
{"title":"Using fragile bit coincidence to improve iris recognition","authors":"K. Hollingsworth, K. W. Bowyer, P. Flynn","doi":"10.1109/BTAS.2009.5339036","DOIUrl":"https://doi.org/10.1109/BTAS.2009.5339036","url":null,"abstract":"The most common iris biometric algorithm represents the texture of an iris using a binary iris code. Not all bits in an iris code are of equal value. A bit is deemed fragile if it varies in value across iris codes created from different images of the same iris. Previous research has shown that iris recognition performance can be improved by masking these fragile bits. Rather than ignoring fragile bits completely, we consider what beneficial information can be obtained from the fragile bits. We find that the locations of fragile bits tend to be consistent across different iris codes of the same eye. We present a metric, called the fragile bit distance, which quantitatively measures the coincidence of the fragile bit patterns in two iris codes. We find that score-fusion of fragile bit distance and Hamming distance works better for recognition than Hamming distance alone. This is the first and only work that we are aware of to use the coincidence of fragile bit locations to improve the accuracy of matches.","PeriodicalId":325900,"journal":{"name":"2009 IEEE 3rd International Conference on Biometrics: Theory, Applications, and Systems","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123950084","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An experimental study on content-based face annotation of photos
Pub Date: 2009-09-28 | DOI: 10.1109/BTAS.2009.5339084
Mei-Chen Yeh, S. Zhang, K. Cheng
Face annotation of photos, a key enabling technology for many exciting new applications, has been gaining broad interest. The task differs from the general face recognition problem because the dataset is unconstrained: an unlabelled face may have no corresponding match in the training set. Moreover, faces in real-life photos show a significantly wider range of variation than those in conventional face datasets. We designed and conducted a thorough experimental study to assess the efficacy of face recognition methods for annotating faces in real-world scenarios. The findings of this study should inform the design choices behind a practical, high-accuracy face annotation system.
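To make the open-set aspect concrete, here is a minimal, hypothetical annotation step: a probe face receives the label of its best gallery match only if the similarity clears a rejection threshold, otherwise it stays unlabelled. The feature vectors and the threshold value are placeholders, not the methods compared in the study.

```python
# Hypothetical open-set annotation: reject probes that match no gallery face
# well enough. Features and threshold are placeholders for illustration.
import numpy as np

def annotate(probe_feat, gallery_feats, gallery_names, reject_threshold=0.6):
    g = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)
    p = probe_feat / np.linalg.norm(probe_feat)
    sims = g @ p                          # cosine similarity to each gallery face
    best = int(np.argmax(sims))
    if sims[best] < reject_threshold:
        return "unknown"                  # probe has no match in the gallery
    return gallery_names[best]
```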
{"title":"An experimental study on content-based face annotation of photos","authors":"Mei-Chen Yeh, S. Zhang, K. Cheng","doi":"10.1109/BTAS.2009.5339084","DOIUrl":"https://doi.org/10.1109/BTAS.2009.5339084","url":null,"abstract":"Face annotation of photos, a key enabling technology for many exciting new applications, has been gaining broad interest. The task is different from the general face recognition problem because the dataset is not constrained — an unlabelled face may not have any corresponding match in the training set. Moreover, faces in real-life photos have a significantly wider variation range than those in the conventional face datasets. We designed and conducted a thorough experimental study to understand the efficacy of face recognition methods for annotating faces in real-world scenarios. The findings of this study should provide information for various design choices for a practical and high-accuracy face annotation system.","PeriodicalId":325900,"journal":{"name":"2009 IEEE 3rd International Conference on Biometrics: Theory, Applications, and Systems","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124026078","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Spectral minutiae representations of fingerprints enhanced by quality data
Pub Date: 2009-09-28 | DOI: 10.1109/BTAS.2009.5339064
Hai-yun Xu, R. Veldhuis
Many fingerprint recognition systems are based on minutiae matching. However, the recognition accuracy of minutiae-based matching algorithms is highly dependent on the quality of the fingerprint minutiae. In this paper, we therefore introduce a quality-integrated spectral minutiae algorithm, which incorporates minutiae quality information to enhance the performance of the spectral minutiae fingerprint recognition system. Our algorithm uses two types of quality data: minutia reliability, expressing the probability that a given point is indeed a minutia, and minutia location accuracy, quantifying the error in the minutia's location. Integrating these two types of quality information into the spectral minutiae representation reduces the Equal Error Rate by over 20% in our experiments.
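A minimal sketch, under stated assumptions, of how such quality data could enter a spectral minutiae representation: each minutia's contribution to the magnitude spectrum is weighted by its reliability, and its location uncertainty attenuates high frequencies. This is illustrative only, not the authors' exact SML/SMO formulation or sampling grid.

```python
# Illustrative quality-weighted spectral minutiae feature. The weighting and
# attenuation scheme are assumptions, not the paper's exact formulation.
import numpy as np

def spectral_minutiae(minutiae, reliabilities, loc_sigmas, freqs):
    """minutiae: (N, 2) x/y positions; reliabilities: (N,) in [0, 1];
    loc_sigmas: (N,) location-error std devs; freqs: (M, 2) spatial frequencies."""
    spectrum = np.zeros(len(freqs), dtype=complex)
    for (x, y), r, s in zip(minutiae, reliabilities, loc_sigmas):
        phase = freqs[:, 0] * x + freqs[:, 1] * y
        # Larger location error attenuates high-frequency contributions more.
        atten = np.exp(-0.5 * (s ** 2) * (freqs ** 2).sum(axis=1))
        spectrum += r * atten * np.exp(-1j * phase)
    return np.abs(spectrum)  # magnitude spectrum is translation-invariant
```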
{"title":"Spectral minutiae representations of fingerprints enhanced by quality data","authors":"Hai-yun Xu, R. Veldhuis","doi":"10.1109/BTAS.2009.5339064","DOIUrl":"https://doi.org/10.1109/BTAS.2009.5339064","url":null,"abstract":"Many fingerprint recognition systems are based on minutiae matching. However, the recognition accuracy of minutiae-based matching algorithms is highly dependent on the fingerprint minutiae quality. Therefore, in this paper, we introduce a quality integrated spectral minutiae algorithm, in which the minutiae quality information is incorporated to enhance the performance of the spectral minutiae fingerprint recognition system. In our algorithm, two types of quality data are used. The first one is the minutiae reliability, expressing the probability that a given point is indeed a minutia; the second one is the minutiae location accuracy, quantifying the error on the minutiae location. We integrate these two types of quality information into the spectral minutiae representation algorithm and achieve a decrease in the Equal Error Rate of over 20% in the experiment.","PeriodicalId":325900,"journal":{"name":"2009 IEEE 3rd International Conference on Biometrics: Theory, Applications, and Systems","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127888483","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Recognition of quantized still face images
Pub Date: 2009-09-28 | DOI: 10.1109/BTAS.2009.5339030
Tao Wu, R. Chellappa
In applications such as document understanding, only binary face images may be available as inputs to a face recognition (FR) algorithm. In this paper, we investigate the effect of the number of grey levels on the PCA, multiple exemplar discriminant analysis (MEDA), and elastic bunch graph matching (EBGM) FR algorithms. The inputs to these algorithms are quantized images (binary images or images with a small number of grey levels) modified by distance and Box-Cox transforms. On FRGC version 1 experiment 1, PCA and MEDA reach 87.66% after the images are thresholded and transformed, while EBGM achieves only 37.5%. Many document understanding applications also require verifying a degraded, low-quality image against a high-quality image from the same source. For this problem, the performance of PCA and MEDA remains stable when the images are degraded by noise, downsampling, or different thresholding parameters.
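The following sketch shows the kind of preprocessing described above, assuming a Euclidean distance transform followed by a Box-Cox power transform; the lambda value and the normalisation step are hypothetical rather than the parameters used in the paper.

```python
# Hedged sketch: distance transform + Box-Cox transform of a binary face image
# prior to PCA/MEDA. Lambda and the normalisation are illustrative choices.
import numpy as np
from scipy.ndimage import distance_transform_edt

def preprocess_binary_face(binary_img, lam=0.5):
    # Distance of each background pixel to the nearest foreground pixel acts
    # as a smooth surrogate for the grey levels lost in binarisation.
    dist = distance_transform_edt(binary_img == 0)
    x = dist + 1.0                         # shift so all values are positive
    boxcox = np.log(x) if lam == 0 else (x ** lam - 1.0) / lam
    return (boxcox - boxcox.mean()) / (boxcox.std() + 1e-8)
```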
{"title":"Recognition of quantized still face images","authors":"Tao Wu, R. Chellappa","doi":"10.1109/BTAS.2009.5339030","DOIUrl":"https://doi.org/10.1109/BTAS.2009.5339030","url":null,"abstract":"In applications such as document understanding, only binary face images may be available as inputs to a face recognition (FR) algorithm. In this paper, we investigate the effects of the number of grey levels on PCA, multiple exemplar discriminant analysis (MEDA) and the elastic bunch graph matching (EBGM) FR algorithms. The inputs to these FR algorithms are quantized images (binary images or images with small number of grey levels) modified by distance and Box-Cox transforms. The performances of PCA and MEDA algorithms are at 87.66% for images in FRGC version 1 experiment 1 after they are thresholded and transformed while the EBGM algorithm achieves only 37.5%. In many document understanding applications, it is also required to verify a degraded low-quality image against a high-quality image, both of which are from the same source. For this problem, the performances of PCA and MEDA are stable when the images were degraded by noise, downsampling or different thresholding parameters.","PeriodicalId":325900,"journal":{"name":"2009 IEEE 3rd International Conference on Biometrics: Theory, Applications, and Systems","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129320612","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Quality based rank-level fusion in multibiometric systems
Pub Date: 2009-09-28 | DOI: 10.1109/BTAS.2009.5339081
A. Abaza, A. Ross
Multibiometric systems fuse evidence from multiple biometric sources, typically resulting in better recognition accuracy. These systems can consolidate information at various levels. For systems operating in identification mode, rank-level fusion presents a viable option. In this paper, several simple but powerful modifications are suggested to enhance the performance of rank-level fusion schemes in the presence of weak classifiers or low-quality input images. These modifications do not require a training phase, making them suitable for a wide range of applications. Experiments conducted on a multimodal database of a few hundred users indicate that the suggested modifications to the highest rank and Borda count methods significantly enhance rank-1 accuracy. Experiments also reveal that including image quality in the fusion scheme improves the Borda count rank-1 accuracy by ~40%.
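For reference, the two classical rank-level fusion rules mentioned above can be sketched as follows; the quality-weighted variant of the Borda count is one plausible way to fold image quality into the fusion and is not necessarily the exact modification proposed in the paper.

```python
# Classical rank-level fusion rules, plus an illustrative quality-weighted
# Borda count. The weighting scheme is an assumption, not the paper's method.
import numpy as np

def borda_count(rank_matrix, quality=None):
    """rank_matrix[m, i]: rank (1 = best) that matcher m assigns to identity i.
    quality[m]: optional confidence/quality weight for matcher m."""
    n_matchers, n_ids = rank_matrix.shape
    weights = np.ones(n_matchers) if quality is None else np.asarray(quality, float)
    points = (n_ids - rank_matrix) * weights[:, None]  # better rank -> more points
    return np.argsort(-points.sum(axis=0))             # identities, best first

def highest_rank(rank_matrix):
    """Each identity keeps its best (lowest) rank across all matchers."""
    return np.argsort(rank_matrix.min(axis=0))
```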
{"title":"Quality based rank-level fusion in multibiometric systems","authors":"A. Abaza, A. Ross","doi":"10.1109/BTAS.2009.5339081","DOIUrl":"https://doi.org/10.1109/BTAS.2009.5339081","url":null,"abstract":"Multibiometric systems fuse evidences from multiple biometric sources typically resulting in better recognition accuracy. These systems can consolidate information at various levels. For systems operating in the identification mode, rank level fusion presents a viable option. In this paper, several simple but powerful modifications are suggested to enhance the performance of rank-level fusion schemes in the presence of weak classifiers or low quality input images. These modifications do not require a training phase, therefore making them suitable in a wide range of applications. Experiments conducted on a multimodal database consisting of a few hundred users indicate that the suggested modifications to the highest rank and Borda count methods significantly enhance the rank-1 accuracy. Experiments also reveal that including image quality in the fusion scheme enhances the Borda count rank-1 accuracy by ~40%.","PeriodicalId":325900,"journal":{"name":"2009 IEEE 3rd International Conference on Biometrics: Theory, Applications, and Systems","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114783201","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Exploring multispectral iris recognition beyond 900nm
Pub Date: 2009-09-28 | DOI: 10.1109/BTAS.2009.5339072
A. Ross, R. Pasula, L. Hornak
Most iris recognition systems acquire images of the eye in the 700 nm–900 nm range of the electromagnetic spectrum. In this work, the iris is examined at wavelengths beyond 900 nm. The purpose is to understand the iris structure at longer wavelengths and to determine the possibility of performing cross-spectral iris matching. An acquisition system is first designed for imaging the iris at narrow spectral bands in the 950 nm–1650 nm range. Next, left and right iris images are acquired from 25 subjects for the analysis. Finally, the possibility of performing cross-spectral matching and multispectral fusion at the match score level is investigated. Experimental results suggest: (a) the feasibility of acquiring iris images at wavelengths beyond 900 nm using InGaAs detectors; (b) the possibility of observing different structures in the iris anatomy at various wavelengths; and (c) the potential of performing cross-spectral matching and multispectral fusion for enhanced iris recognition.
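As a simple illustration of match-score-level fusion across spectral bands, a weighted sum rule over per-band match scores might look like the sketch below; the weights and the assumption of pre-normalised scores are placeholders, since the paper's fusion details are not restated here.

```python
# Illustrative sum-rule fusion of per-band match scores (scores assumed to be
# normalised to [0, 1]; band weights are placeholders).
import numpy as np

def fuse_band_scores(band_scores, weights=None):
    """band_scores: one match score per spectral band for the same comparison."""
    s = np.asarray(band_scores, dtype=float)
    w = np.ones_like(s) if weights is None else np.asarray(weights, dtype=float)
    return float((w * s).sum() / w.sum())   # weighted sum rule
```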
{"title":"Exploring multispectral iris recognition beyond 900nm","authors":"A. Ross, R. Pasula, L. Hornak","doi":"10.1109/BTAS.2009.5339072","DOIUrl":"https://doi.org/10.1109/BTAS.2009.5339072","url":null,"abstract":"Most iris recognition systems acquire images of the eye in the 700nm–900nm range of the electromagnetic spectrum. In this work, the iris is examined at wavelengths beyond 900nm. The purpose is to understand the iris structure at longer wavelengths and to determine the possibility of performing cross-spectral iris matching. An acquisition system is first designed for imaging the iris at narrow spectral bands in the 950 nm–1650 nm range. Next, the left and right images of the iris are acquired from 25 subjects in order to conduct the analysis. Finally, the possibility of performing cross-spectral matching and multispectral fusion at the match score level is investigated. Experimental results suggest: (a) the feasibility of acquiring iris images in wavelengths beyond 900nm using InGaAs detectors; (b) the possibility of observing different structures in the iris anatomy at various wavelengths; and (c) the potential of performing cross-spectral matching and multispectral fusion for enhanced iris recognition.","PeriodicalId":325900,"journal":{"name":"2009 IEEE 3rd International Conference on Biometrics: Theory, Applications, and Systems","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126141230","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Usability testing of an overlay to improve face capture
Pub Date: 2009-09-28 | DOI: 10.1109/BTAS.2009.5339029
M. Theofanos, Brian C. Stanton, Yee-Yin Choong, R. Micheals
It's easy to take a face photograph, isn't it? What if that photograph is intended as input to an automatic face recognition system? Why is it, then, that even with a "captive audience", such as at a border crossing or point of entry, so many face photographs are unsuitable for face recognition? In a previous study, the authors suggested the use of a "face overlay", a visual reticule that can be superimposed onto a live video feed to facilitate the face image capture process. In this study, we provide a detailed qualitative and quantitative analysis of the affordance, efficiency, effectiveness, and user satisfaction of the visual overlay. Results of a controlled usability study suggest that the overlay improves face image quality, even when photographers receive no prior training.
{"title":"Usability testing of an overlay to improve face capture","authors":"M. Theofanos, Brian C. Stanton, Yee-Yin Choong, R. Micheals","doi":"10.1109/BTAS.2009.5339029","DOIUrl":"https://doi.org/10.1109/BTAS.2009.5339029","url":null,"abstract":"It's easy to take a face photograph, isn't it? What if that photograph is intended for input into an automatic face recognition system? Why is it then, even with a \"captive audience\" such as at a border crossing or point of entry, are so many face photographs unsuitable for face recognition? In a previous study, the authors suggested the use of a \"face overlay\" — a visual reticule that may be superimposed onto a live video feed to facilitate the face image capture process. In this study, we provide a detailed qualitative and quantitative analysis of the affordance, efficiency, effectiveness, and user-satisfaction of the visual overlay. Results of a controlled usability study suggest that the overlay improves face image quality, even when photographers are provided with no prior training.","PeriodicalId":325900,"journal":{"name":"2009 IEEE 3rd International Conference on Biometrics: Theory, Applications, and Systems","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126541920","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fingerprint skeleton matching based on local descriptor
Pub Date: 2009-09-28 | DOI: 10.1109/BTAS.2009.5339032
Julien Bohné, V. Despiegel
In this paper, we present a new fingerprint matching algorithm based on a local skeleton descriptor. The descriptor uses ridge-count information to encode minutiae locations within a small neighborhood; because ridge counts change little under elastic skin deformation, the descriptor is robust to distortion. We developed an efficient algorithm to match these descriptors and a strategy to combine the matches of many local descriptors. Our algorithm obtains promising results on both tenprint-to-tenprint and latent-to-tenprint matching.
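A hedged sketch of how two such ridge-count neighborhood descriptors might be compared; the fixed neighbour ordering and the integer tolerance are illustrative assumptions, not details taken from the paper.

```python
# Illustrative comparison of two local ridge-count descriptors. Ridge counts
# change little under elastic distortion, so a small tolerance is allowed.
import numpy as np

def descriptor_similarity(ridge_counts_a, ridge_counts_b, tol=1):
    """Each argument: ridge counts from a central minutia to its k nearest
    neighbours, ordered consistently (e.g. by angle around the minutia)."""
    a = np.asarray(ridge_counts_a)
    b = np.asarray(ridge_counts_b)
    return float((np.abs(a - b) <= tol).mean())  # fraction of agreeing neighbours
```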
{"title":"Fingerprint skeleton matching based on local descriptor","authors":"Julien Bohné, V. Despiegel","doi":"10.1109/BTAS.2009.5339032","DOIUrl":"https://doi.org/10.1109/BTAS.2009.5339032","url":null,"abstract":"In this paper, we present a new fingerprint matching algorithm based on a local skeleton descriptor. This descriptor uses ridge count information to encode minutiae locations in a small neighborhood. Taking advantage of ridge count properties, our descriptor is robust to distortions. We developed an efficient algorithm to match our descriptor and a strategy to combine matchings of many local descriptors. Our algorithm obtains interesting results on both tenprint-to-tenprint and latent-to-tenprint matchings.","PeriodicalId":325900,"journal":{"name":"2009 IEEE 3rd International Conference on Biometrics: Theory, Applications, and Systems","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123285121","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A comparison of fingerprint image quality and matching performance between healthcare and general populations
Pub Date: 2009-09-28 | DOI: 10.1109/BTAS.2009.5339017
C. Blomeke, S. Elliott, Benny Senjaya, G. Hales
Research has shown that, for some age groups, fingerprint quality can impact the performance of biometric systems. A desirable feature of biometrics is that they are suitable for use across the population. This applied study examines the performance of a fingerprint recognition system in a healthcare environment. Anecdotal evidence suggested that front-line healthcare workers may produce lower-quality images because frequent hand washing removes oils from the skin. During training, individuals are told to add oil to their fingers, by wiping oil from their foreheads, to improve the resulting fingerprint quality. Compared with two general populations (collected on optical and capacitance sensors), the healthcare population the authors tested showed a significant difference in skin oiliness, but not in image quality. However, the fingerprint algorithm's performance did differ between the healthcare and non-healthcare groups when evaluated against the capacitance dataset.
{"title":"A comparison of fingerprint image quality and matching performance between healthcare and general populations","authors":"C. Blomeke, S. Elliott, Benny Senjaya, G. Hales","doi":"10.1109/BTAS.2009.5339017","DOIUrl":"https://doi.org/10.1109/BTAS.2009.5339017","url":null,"abstract":"Research has shown for some age groups, quality of fingerprints can impact the performance of biometric systems. A desirable feature of biometrics is that they are suitable for use across the population. This applied study examines the performance of a fingerprint recognition system in a healthcare environment. Anecdotal evidence suggested front line healthcare workers may have lower image quality due to continued hand washing which may remove oils from their skin. During training, individuals are told to add oil to their fingers by wiping oil from their foreheads to improve the resulting quality of the fingerprints. In the healthcare population the authors tested, compared to two general populations (collected on optical and capacitance sensors) there was a significant difference in skin oiliness, but not in image quality. There was a difference across healthcare and non-healthcare groups in the performance of the fingerprint algorithm when compared against the capacitance dataset.","PeriodicalId":325900,"journal":{"name":"2009 IEEE 3rd International Conference on Biometrics: Theory, Applications, and Systems","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123371366","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An introduction to biometric-completeness: The equivalence of matching and quality
Pub Date: 2009-09-28 | DOI: 10.1109/BTAS.2009.5339055
P. Phillips, J. Beveridge
This paper introduces the concept of biometric-completeness. A problem is biometric-complete if solving it is “equivalent” to solving a biometric recognition problem. The concept is modeled on the informal notion of artificial intelligence (AI) completeness and is illustrated by showing a formal equivalence between biometric recognition and quality assessment of biometric samples. The model allows the quality of biometric samples to be included in verification decisions and covers most methods for incorporating quality into biometric systems. The key result shows that finding the perfect quality measure for any algorithm is equivalent to finding the perfect verification algorithm. Two consequences follow: finding the perfect quality measure is equivalent to solving the open-set and closed-set identification problems, and a universal perfect quality measure cannot exist.
{"title":"An introduction to biometric-completeness: The equivalence of matching and quality","authors":"P. Phillips, J. Beveridge","doi":"10.1109/BTAS.2009.5339055","DOIUrl":"https://doi.org/10.1109/BTAS.2009.5339055","url":null,"abstract":"This paper introduces the concept of biometric-completeness. A problem is biometric-complete if solving the problem is “equivalent” to solving a biometric recognition problem. The concept of biometric-completeness is modeled on the informal concept of artificial intelligence (AI) completeness. The concept of biometric-completeness is illustrated by showing a formal equivalence between biometric recognition and quality assessment of biometric samples. The model allows for the inclusion of quality of biometric samples in verification decisions. The model includes most methods for incorporating quality into biometric systems. The key result in this paper shows that finding the perfect quality measure for any algorithm is equivalent to finding the perfect verification algorithm. Two results that follow from the main result are: finding the perfect quality measure is equivalent to solving the open-set and closed-set identification problems; and that a universal perfect quality measure cannot exist.","PeriodicalId":325900,"journal":{"name":"2009 IEEE 3rd International Conference on Biometrics: Theory, Applications, and Systems","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130826813","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}