Pub Date: 2011-10-11 | DOI: 10.1109/IJCB.2011.6117516
Two faces are better than one: Face recognition in group photographs
O. K. Manyam, Neeraj Kumar, P. Belhumeur, D. Kriegman
Face recognition systems classically recognize people individually. When presented with a group photograph containing multiple people, such systems implicitly assume statistical independence between each detected face. We question this basic assumption and consider instead that there is a dependence between face regions from the same image; after all, the image was acquired with a single camera, under consistent lighting (distribution, direction, spectrum), camera motion, and scene/camera geometry. Such naturally occurring commonalities between face images can be exploited when recognition decisions are made jointly across the faces, rather than independently. Furthermore, when recognizing people in isolation, some features such as color are usually uninformative in unconstrained settings. But by considering pairs of people, the relative color difference provides valuable information. This paper reconsiders the independence assumption, introduces new features and methods for recognizing pairs of individuals in group photographs, and demonstrates a marked improvement when these features are used in joint decision making rather than independent decision making. While these features alone are only moderately discriminative, we combine them with state-of-the-art attribute features and demonstrate effective recognition performance. Initial experiments on two datasets show promising improvements in accuracy.
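The pairwise color cue described above can be illustrated with a short sketch: a hypothetical relative-color feature computed from two face crops taken from the same photograph (the crop extraction, color space, and normalization are assumptions for illustration, not the authors' exact design).

```python
import numpy as np

def mean_color(face_crop):
    """Mean RGB value of a face crop of shape (H, W, 3), values in [0, 1]."""
    return face_crop.reshape(-1, 3).mean(axis=0)

def pairwise_color_feature(face_a, face_b):
    """Relative color difference between two faces from the same photo.

    Absolute color is unreliable across photos (lighting, white balance),
    but within one photo the *difference* between two faces is informative.
    """
    return mean_color(face_a) - mean_color(face_b)

# Toy usage: two random "face crops" standing in for detections from one image.
rng = np.random.default_rng(0)
face_a = rng.random((64, 64, 3))
face_b = rng.random((64, 64, 3))
print(pairwise_color_feature(face_a, face_b))
```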
{"title":"Two faces are better than one: Face recognition in group photographs","authors":"O. K. Manyam, Neeraj Kumar, P. Belhumeur, D. Kriegman","doi":"10.1109/IJCB.2011.6117516","DOIUrl":"https://doi.org/10.1109/IJCB.2011.6117516","url":null,"abstract":"Face recognition systems classically recognize people individually. When presented with a group photograph containing multiple people, such systems implicitly assume statistical independence between each detected face. We question this basic assumption and consider instead that there is a dependence between face regions from the same image; after all, the image was acquired with a single camera, under consistent lighting (distribution, direction, spectrum), camera motion, and scene/camera geometry. Such naturally occurring commonalities between face images can be exploited when recognition decisions are made jointly across the faces, rather than independently. Furthermore, when recognizing people in isolation, some features such as color are usually uninformative in unconstrained settings. But by considering pairs of people, the relative color difference provides valuable information. This paper reconsiders the independence assumption, introduces new features and methods for recognizing pairs of individuals in group photographs, and demonstrates a marked improvement when these features are used in joint decision making vs. independent decision making. While these features alone are only moderately discriminative, we combine these new features with state-of-art attribute features and demonstrate effective recognition performance. Initial experiments on two datasets show promising improvements in accuracy.","PeriodicalId":103913,"journal":{"name":"2011 International Joint Conference on Biometrics (IJCB)","volume":"99 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125100639","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2011-10-11 | DOI: 10.1109/IJCB.2011.6117514
Face recognition in low-resolution videos using learning-based likelihood measurement model
S. Biswas, G. Aggarwal, P. Flynn
Low-resolution surveillance videos with uncontrolled pose and illumination present a significant challenge to both face tracking and recognition algorithms. The considerable appearance difference between the probe videos and the high-resolution controlled gallery images acquired during enrollment makes the problem even harder. In this paper, we extend the simultaneous tracking and recognition framework [22] to address the problem of matching high-resolution gallery images with surveillance-quality probe videos. We propose using a learning-based likelihood measurement model to handle the large appearance and resolution difference between the gallery images and probe videos. The measurement model consists of a mapping which transforms the gallery and probe features to a space in which the Euclidean distances between them approximate the distances that would have been obtained had all the descriptors been computed from good-quality frontal images. Experimental results on real surveillance-quality videos and comparisons with related approaches show the effectiveness of the proposed framework.
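As a rough illustration of the idea of mapping gallery and probe features into a common space where plain Euclidean matching works, the sketch below fits a simple least-squares linear projection on paired toy features; the authors' learned likelihood measurement model is not reproduced here.

```python
import numpy as np

# Toy training data: paired features of the same subjects, where probe
# features come from low-resolution frames and gallery features from
# high-resolution enrollment images (dimensions are arbitrary here).
rng = np.random.default_rng(1)
n_pairs, d_probe, d_gallery = 200, 64, 128
X_probe = rng.standard_normal((n_pairs, d_probe))
X_gallery = rng.standard_normal((n_pairs, d_gallery))

# Least-squares mapping W so that X_probe @ W approximates X_gallery;
# after mapping, ordinary Euclidean distances can be used for matching.
W, *_ = np.linalg.lstsq(X_probe, X_gallery, rcond=None)

def match_score(probe_feat, gallery_feat):
    """Negative Euclidean distance in the common space (higher = better)."""
    return -np.linalg.norm(probe_feat @ W - gallery_feat)

print(match_score(X_probe[0], X_gallery[0]))
```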
{"title":"Face recognition in low-resolution videos using learning-based likelihood measurement model","authors":"S. Biswas, G. Aggarwal, P. Flynn","doi":"10.1109/IJCB.2011.6117514","DOIUrl":"https://doi.org/10.1109/IJCB.2011.6117514","url":null,"abstract":"Low-resolution surveillance videos with uncontrolled pose and illumination present a significant challenge to both face tracking and recognition algorithms. Considerable appearance difference between the probe videos and high-resolution controlled images in the gallery acquired during enrollment makes the problem even harden In this paper, we extend the simultaneous tracking and recognition framework [22] to address the problem of matching high-resolution gallery images with surveillance quality probe videos. We propose using a learning-based likelihood measurement model to handle the large appearance and resolution difference between the gallery images and probe videos. The measurement model consists of a mapping which transforms the gallery and probe features to a space in which their inter-Euclidean distances approximate the distances that would have been obtained had all the descriptors been computed from good quality frontal images. Experimental results on real surveillance quality videos and comparisons with related approaches show the effectiveness of the proposed framework.","PeriodicalId":103913,"journal":{"name":"2011 International Joint Conference on Biometrics (IJCB)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115044519","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2011-10-11 | DOI: 10.1109/IJCB.2011.6117544
Evaluation of gender classification methods on thermal and near-infrared face images
Cunjian Chen, A. Ross
Automatic gender classification based on face images is receiving increased attention in the biometrics community. Most gender classification systems have been evaluated only on face images captured in the visible spectrum. In this work, the possibility of deducing gender from face images obtained in the near-infrared (NIR) and thermal (THM) spectra is established. It is observed that the use of local binary pattern histogram (LBPH) features along with discriminative classifiers results in reasonable gender classification accuracy in both the NIR and THM spectra. Further, the performance of human subjects in classifying thermal face images is studied. Experiments suggest that machine-learning methods are better suited than humans for gender classification from face images in the thermal spectrum.
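A minimal sketch of the LBPH-plus-discriminative-classifier pipeline the abstract describes, using scikit-image and scikit-learn on synthetic data; the block grid size, LBP parameters, and the linear SVM are assumptions, not the paper's exact configuration.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def lbph_features(image, P=8, R=1, grid=(4, 4)):
    """Concatenated local binary pattern histograms over a grid of blocks."""
    lbp = local_binary_pattern(image, P, R, method="uniform")
    n_bins = P + 2  # number of distinct 'uniform' codes
    h, w = lbp.shape
    hists = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            block = lbp[i * h // grid[0]:(i + 1) * h // grid[0],
                        j * w // grid[1]:(j + 1) * w // grid[1]]
            hist, _ = np.histogram(block, bins=n_bins, range=(0, n_bins), density=True)
            hists.append(hist)
    return np.concatenate(hists)

# Toy usage with random "NIR face images"; real data would be aligned face crops.
rng = np.random.default_rng(2)
images = rng.random((20, 64, 64))
labels = rng.integers(0, 2, size=20)  # toy gender labels
X = np.array([lbph_features(img) for img in images])
clf = SVC(kernel="linear").fit(X, labels)
print(clf.score(X, labels))
```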
{"title":"Evaluation of gender classification methods on thermal and near-infrared face images","authors":"Cunjian Chen, A. Ross","doi":"10.1109/IJCB.2011.6117544","DOIUrl":"https://doi.org/10.1109/IJCB.2011.6117544","url":null,"abstract":"Automatic gender classification based on face images is receiving increased attention in the biometrics community. Most gender classification systems have been evaluated only on face images captured in the visible spectrum. In this work, the possibility of deducing gender from face images obtained in the near-infrared (NIR) and thermal (THM) spectra is established. It is observed that the use of local binary pattern histogram (LBPH) features along with discriminative classifiers results in reasonable gender classification accuracy in both the NIR and THM spectra. Further, the performance of human subjects in classifying thermal face images is studied. Experiments suggest that machine-learning methods are better suited than humans for gender classification from face images in the thermal spectrum.","PeriodicalId":103913,"journal":{"name":"2011 International Joint Conference on Biometrics (IJCB)","volume":"191 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121308420","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2011-10-11 | DOI: 10.1109/IJCB.2011.6117525
On matching latent to latent fingerprints
A. Sankaran, Tejas I. Dhamecha, Mayank Vatsa, Richa Singh
This research presents a forensics application of matching two latent fingerprints. In crime scene settings, it is often necessary to match multiple latent fingerprints. Unlike matching a latent print against inked or live fingerprints, this research problem is very challenging and requires careful analysis and attention. The contribution of this paper is threefold: (i) a comparative analysis of existing algorithms is presented for this application, (ii) fusion and context switching frameworks are presented to improve the identification performance, and (iii) a multi-latent fingerprint database is prepared. The experiments highlight the need for improved feature extraction and processing methods and show substantial room for improvement on this important research problem.
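Score-level fusion of multiple matchers, one ingredient the paper builds on, can be sketched as a simple min-max normalization followed by the sum rule; the paper's specific fusion and context-switching frameworks are not reproduced here.

```python
import numpy as np

def min_max_normalize(scores):
    """Map raw matcher scores to [0, 1] so different matchers can be combined."""
    scores = np.asarray(scores, dtype=float)
    lo, hi = scores.min(), scores.max()
    return (scores - lo) / (hi - lo) if hi > lo else np.zeros_like(scores)

def sum_rule_fusion(score_lists):
    """Sum-rule fusion of several matchers' scores for the same candidate list."""
    normalized = [min_max_normalize(s) for s in score_lists]
    return np.mean(normalized, axis=0)

# Toy usage: two matchers scoring the same five gallery candidates.
matcher_a = [0.2, 0.9, 0.4, 0.1, 0.6]
matcher_b = [12.0, 30.0, 18.0, 9.0, 25.0]
fused = sum_rule_fusion([matcher_a, matcher_b])
print(fused.argmax())  # index of the best-matching candidate after fusion
```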
{"title":"On matching latent to latent fingerprints","authors":"A. Sankaran, Tejas I. Dhamecha, Mayank Vatsa, Richa Singh","doi":"10.1109/IJCB.2011.6117525","DOIUrl":"https://doi.org/10.1109/IJCB.2011.6117525","url":null,"abstract":"This research presents a forensics application of matching two latent fingerprints. In crime scene settings, it is often required to match multiple latent fingerprints. Unlike matching latent with inked or live fingerprints, this research problem is very challenging and requires proper analysis and attention. The contribution of this paper is three fold: (i) a comparative analysis of existing algorithms is presented for this application, (ii) fusion and context switching frameworks are presented to improve the identification performance, and (iii) a multi-latent fingerprint database is prepared. The experiments highlight the need for improved feature extraction and processing methods and exhibit large scope of improvement in this important research problem.","PeriodicalId":103913,"journal":{"name":"2011 International Joint Conference on Biometrics (IJCB)","volume":"148 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123414316","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2011-10-11 | DOI: 10.1109/IJCB.2011.6117485
Study on the BeiHang Keystroke Dynamics Database
Yilin Li, Baochang Zhang, Yao Cao, Sanqiang Zhao, Yongsheng Gao, Jianzhuang Liu
This paper introduces the new BeiHang (BH) Keystroke Dynamics Database for testing and evaluating biometric approaches. Unlike existing keystroke dynamics research, which relies solely on laboratory experiments, this database was collected from a real, commercially deployed system and is thus more comprehensive and more faithful to actual user behavior. Moreover, the database comes with ready-to-use benchmark results for three keystroke dynamics methods: a Nearest Neighbor classifier, a Gaussian model, and a One-Class Support Vector Machine. Both the database and the benchmark results are open to the public and provide a significant experimental platform for researchers in the keystroke dynamics area.
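A minimal sketch of one of the benchmarked methods, a One-Class SVM trained only on a genuine user's keystroke-timing vectors, using scikit-learn on toy data; the feature layout and parameters are assumptions, not the paper's protocol.

```python
import numpy as np
from sklearn.svm import OneClassSVM

# Toy keystroke-timing vectors (e.g., hold times and inter-key latencies in
# seconds) for one enrolled user; the feature layout is an assumption.
rng = np.random.default_rng(3)
genuine_train = 0.15 + 0.02 * rng.standard_normal((40, 10))

# The one-class SVM models only the genuine user's typing rhythm; at test
# time it labels a sample +1 (accept) or -1 (reject).
ocsvm = OneClassSVM(kernel="rbf", gamma="scale", nu=0.1).fit(genuine_train)

genuine_probe = 0.15 + 0.02 * rng.standard_normal((1, 10))
impostor_probe = 0.30 + 0.05 * rng.standard_normal((1, 10))
print(ocsvm.predict(genuine_probe), ocsvm.predict(impostor_probe))
```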
{"title":"Study on the BeiHang Keystroke Dynamics Database","authors":"Yilin Li, Baochang Zhang, Yao Cao, Sanqiang Zhao, Yongsheng Gao, Jianzhuang Liu","doi":"10.1109/IJCB.2011.6117485","DOIUrl":"https://doi.org/10.1109/IJCB.2011.6117485","url":null,"abstract":"This paper introduces a new BeiHang (BH) Keystroke Dynamics Database for testing and evaluation of biometric approaches. Different from the existing keystroke dynamics researches which solely rely on laboratory experiments, the developed database is collected from a real commercialized system and thus is more comprehensive and more faithful to human behavior. Moreover, our database comes with ready-to-use benchmark results of three keystroke dynamics methods, Nearest Neighbor classifier, Gaussian Model and One-Class Support Vector Machine. Both the database and benchmark results are open to the public and provide a significant experimental platform for international researchers in the keystroke dynamics area.","PeriodicalId":103913,"journal":{"name":"2011 International Joint Conference on Biometrics (IJCB)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125020835","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2011-10-11 | DOI: 10.1109/IJCB.2011.6117532
Score-level fusion based on the direct estimation of the Bayes error gradient distribution
Yasushi Makihara, D. Muramatsu, Y. Yagi, Md. Altab Hossain
This paper describes a method of score-level fusion that optimizes the Receiver Operating Characteristic (ROC) curve for multimodal biometrics. When the Probability Density Functions (PDFs) of the multimodal scores for clients and impostors are obtained from training samples, it is well known that the isolines of a function of the probability densities, such as the likelihood ratio, posterior, or Bayes error gradient, give the optimal ROC curve. The success of such probability density-based methods depends on the PDF estimation for clients and impostors, which remains a challenging problem. We therefore introduce a framework for direct estimation of the Bayes error gradient that bypasses the troublesome PDF estimation step. Lattice-type control points are allocated in the multi-dimensional score space, and the Bayes error gradients at the control points are estimated in an energy minimization framework that incorporates not only the data fitness of the training samples but also boundary conditions and monotonic increase constraints to suppress over-training. Experimental results on both simulated and real public data show the effectiveness of the proposed method.
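For context, the classical PDF-based fusion that the paper seeks to bypass can be sketched as a likelihood ratio computed from kernel density estimates of the client and impostor score distributions; this is the baseline, not the proposed direct Bayes error gradient estimation.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Toy 2D score data (two matchers) for clients and impostors; the paper's
# point is to *avoid* this PDF estimation, shown here only as the baseline.
rng = np.random.default_rng(4)
client_scores = rng.normal(loc=[0.7, 0.8], scale=0.1, size=(300, 2))
impostor_scores = rng.normal(loc=[0.3, 0.4], scale=0.1, size=(300, 2))

client_pdf = gaussian_kde(client_scores.T)
impostor_pdf = gaussian_kde(impostor_scores.T)

def likelihood_ratio(score_pair):
    """Classic PDF-based fused score: p(scores | client) / p(scores | impostor)."""
    s = np.asarray(score_pair, dtype=float).reshape(2, 1)
    return float(client_pdf(s)[0] / impostor_pdf(s)[0])

print(likelihood_ratio([0.65, 0.75]), likelihood_ratio([0.35, 0.40]))
```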
{"title":"Score-level fusion based on the direct estimation of the Bayes error gradient distribution","authors":"Yasushi Makihara, D. Muramatsu, Y. Yagi, Md. Altab Hossain","doi":"10.1109/IJCB.2011.6117532","DOIUrl":"https://doi.org/10.1109/IJCB.2011.6117532","url":null,"abstract":"This paper describes a method of score-level fusion to optimize a Receiver Operating Characteristic (ROC) curve for multimodal biometrics. When the Probability Density Functions (PDFs) of the multimodal scores for each client and imposter are obtained from the training samples, it is well known that the isolines of a function of probabilistic densities, such as the likelihood ratio, posterior, or Bayes error gradient, give the optimal ROC curve. The success of the probability density-based methods depends on the PDF estimation for each client and imposter, which still remains a challenging problem. Therefore, we introduce a framework of direct estimation of the Bayes error gradient that bypasses the troublesome PDF estimation for each client and imposter. The lattice-type control points are allocated in a multiple score space, and the Bayes error gradients on the control points are then estimated in a comprehensive manner in the energy minimization framework including not only the data fitness of the training samples but also the boundary conditions and monotonic increase constraints to suppress the over-training. The experimental results for both simulation and real public data show the effectiveness of the proposed method.","PeriodicalId":103913,"journal":{"name":"2011 International Joint Conference on Biometrics (IJCB)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128378777","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2011-10-11 | DOI: 10.1109/IJCB.2011.6117553
Speech cryptographic key regeneration based on password
K. Inthavisas, D. Lopresti
In this paper, we propose a way to combine a password with a speech-based biometric cryptosystem. We present two schemes that use the password to enhance verification performance in the biometric cryptosystem. Both can resist a password brute-force search if the biometrics are not compromised. Even if the biometrics are compromised, attackers must spend many more attempts searching for the cryptographic keys than with a traditional password-based approach. In addition, the experimental results show that the verification performance is significantly improved.
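A minimal sketch of the general idea of binding a password to biometric-derived bits during key derivation, using PBKDF2 on toy inputs; this illustrates only the two-factor search-space argument and is not the authors' scheme.

```python
import hashlib
import os

def derive_key(password: bytes, biometric_bits: bytes, salt: bytes) -> bytes:
    """Derive a cryptographic key from a password *and* biometric-derived bits.

    An attacker who knows neither factor faces a joint search space; even if
    the biometric bits leak, the password still has to be brute-forced.
    """
    return hashlib.pbkdf2_hmac("sha256", password + biometric_bits, salt, 200_000)

# Toy usage: the biometric bits would come from a speech-based key
# regeneration scheme; here they are just placeholder bytes.
salt = os.urandom(16)
key = derive_key(b"correct horse battery staple", b"\x12\x34\x56\x78", salt)
print(key.hex())
```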
{"title":"Speech cryptographic key regeneration based on password","authors":"K. Inthavisas, D. Lopresti","doi":"10.1109/IJCB.2011.6117553","DOIUrl":"https://doi.org/10.1109/IJCB.2011.6117553","url":null,"abstract":"In this paper, we propose a way to combine a password with a speech biometric cryptosystem. We present two schemes to enhance verification performance in a biometric cryptosystem using password. Both can resist a password brute-force search if biometrics are not compromised. Even if the biometrics are compromised, attackers have to spend many more attempts in searching for cryptographic keys when we compare ours with a traditional password-based approach. In addition, the experimental results show that the verification performance is significantly improved.","PeriodicalId":103913,"journal":{"name":"2011 International Joint Conference on Biometrics (IJCB)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128555340","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2011-10-11 | DOI: 10.1109/IJCB.2011.6117555
Learning weighted sparse representation of encoded facial normal information for expression-robust 3D face recognition
Huibin Li, Di Huang, J. Morvan, Liming Chen
This paper proposes a novel approach for 3D face recognition by learning a weighted sparse representation of encoded facial normal information. To comprehensively describe the 3D facial surface, the three components of the normal vector (in the X, Y, and Z planes, respectively) are encoded locally into their corresponding normal pattern histograms. These are then fed to a sparse representation classifier enhanced by learning-based spatial weights. Experimental results on the FRGC v2.0 database show that the proposed encoded normal information is much more discriminative than the original normal information. Moreover, the patch-based weights learned using the FRGC v1.0 and Bosphorus datasets demonstrate the importance of each facial physical component for 3D face recognition.
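A rough sketch of describing a facial surface through per-component histograms of its normals, computed here from a synthetic depth map with NumPy; the paper's local encoding, learned patch weights, and sparse representation classifier are not reproduced.

```python
import numpy as np

def normal_component_histograms(depth, bins=16):
    """Histograms of the x, y, z components of surface normals of a depth map.

    A crude stand-in for encoded normal information: normals are estimated
    from depth gradients and each component is histogrammed separately.
    """
    dz_dy, dz_dx = np.gradient(depth.astype(float))
    normals = np.dstack((-dz_dx, -dz_dy, np.ones_like(depth, dtype=float)))
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)
    hists = []
    for c in range(3):  # x, y, z components
        hist, _ = np.histogram(normals[..., c], bins=bins, range=(-1, 1), density=True)
        hists.append(hist)
    return np.concatenate(hists)

# Toy usage on a synthetic "range image" of a bumpy surface.
y, x = np.mgrid[0:64, 0:64]
depth = np.sin(x / 10.0) + np.cos(y / 12.0)
print(normal_component_histograms(depth).shape)  # (48,) for 3 x 16 bins
```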
{"title":"Learning weighted sparse representation of encoded facial normal information for expression-robust 3D face recognition","authors":"Huibin Li, Di Huang, J. Morvan, Liming Chen","doi":"10.1109/IJCB.2011.6117555","DOIUrl":"https://doi.org/10.1109/IJCB.2011.6117555","url":null,"abstract":"This paper proposes a novel approach for 3D face recognition by learning weighted sparse representation of encoded facial normal information. To comprehensively describe 3D facial surface, three components, in X, Y, and Z-plane respectively, of normal vector are encoded locally to their corresponding normal pattern histograms. They are finally fed to a sparse representation classifier enhanced by learning based spatial weights. Experimental results achieved on the FRGC v2.0 database prove that the proposed encoded normal information is much more discriminative than original normal information. Moreover, the patch based weights learned using the FRGC v1.0 and Bosphorus datasets also demonstrate the importance of each facial physical component for 3D face recognition.","PeriodicalId":103913,"journal":{"name":"2011 International Joint Conference on Biometrics (IJCB)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128692460","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2011-10-11 | DOI: 10.1109/IJCB.2011.6117538
Mining patterns of orientations and magnitudes for face recognition
Ngoc-Son Vu, A. Caplier
A good face recognition system is one that quickly delivers highly accurate results to the end user. For this purpose, the face representation must be robust, discriminative, and of low computational cost in terms of both time and space. Inspired by the recently proposed POEM (Patterns of Oriented Edge Magnitudes) feature set, which considers the relationships between the edge distributions of different image patches and is argued to balance these three concerns well, this work further exploits patterns of both orientations and magnitudes to build a more efficient algorithm. We first present novel features called Patterns of Dominant Orientations (PDO), which consider the relationships between the “dominant” orientations of local image regions at different scales. We also propose applying the whitened PCA technique to both the POEM- and PDO-based representations to obtain more compact and discriminative face descriptors. We then show that the two methods have complementary strengths and that, by combining the two descriptors, one obtains stronger results than with either of them considered separately. Through experiments carried out on several common benchmarks, including both frontal and non-frontal FERET as well as the AR datasets, we show that our approach is more efficient than contemporary ones.
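The whitened PCA step can be sketched as follows on toy descriptors with scikit-learn; the POEM/PDO feature extraction itself is not reproduced, so the input vectors here are placeholders.

```python
import numpy as np
from sklearn.decomposition import PCA

# Toy high-dimensional face descriptors (e.g., concatenated local histograms);
# the real POEM/PDO extraction is not reproduced here.
rng = np.random.default_rng(5)
descriptors = rng.random((100, 512))

# Whitened PCA: project onto principal components and rescale each to unit
# variance, which tends to make simple cosine/Euclidean matching work better.
wpca = PCA(n_components=64, whiten=True).fit(descriptors)
compact = wpca.transform(descriptors)

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(compact.shape, cosine_similarity(compact[0], compact[1]))
```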
{"title":"Mining patterns of orientations and magnitudes for face recognition","authors":"Ngoc-Son Vu, A. Caplier","doi":"10.1109/IJCB.2011.6117538","DOIUrl":"https://doi.org/10.1109/IJCB.2011.6117538","url":null,"abstract":"Good face recognition system is one which quickly delivers high accurate results to the end user. For this purpose, face representation must be robust, discriminative and also of low computational cost in both terms of time and space. Inspired by recently proposed feature set so-called POEM (Patterns of Oriented Edge Magnitudes) which considers the relationships between edge distributions of different image patches and is argued balancing well the three concerns, this work proposes to further exploit patterns of both orientations and magnitudes for building more efficient algorithm. We first present novel features called Patterns of Dominant Orientations (PDO) which consider the relationships between “dominant” orientations of local image regions at different scales. We also propose to apply the whitened PCA technique upon both the POEM and PDO based representations to get more compact and discriminative face descriptors. We then show that the two methods have complementary strength and that by combining the two descriptors, one obtains stronger results than either of them considered separately. By experiments carried out on several common benchmarks, including both frontal and non-frontal FERET as well as the AR datasets, we prove that our approach is more efficient than contemporary ones.","PeriodicalId":103913,"journal":{"name":"2011 International Joint Conference on Biometrics (IJCB)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125936443","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2011-10-11 | DOI: 10.1109/IJCB.2011.6117511
Eyebrow shape-based features for biometric recognition and gender classification: A feasibility study
Yujie Dong, D. Woodard
A wide variety of applications in forensic, government, and commercial fields require reliable personal identification. However, recognition performance is severely affected by non-ideal images caused by motion blur, poor contrast, varying expressions, or illumination artifacts. In this paper, we investigate the use of shape-based eyebrow features under non-ideal imaging conditions for biometric recognition and gender classification. We extract various shape-based features from eyebrow images and compare three different classification methods: a Minimum Distance (MD) classifier, a Linear Discriminant Analysis (LDA) classifier, and a Support Vector Machine (SVM) classifier. The methods were tested on images from two publicly available facial image databases: the Multiple Biometric Grand Challenge (MBGC) database and the Face Recognition Grand Challenge (FRGC) database. The obtained recognition rates of 90% on the MBGC database and 75% on the FRGC database, together with gender classification rates of 96% and 97% on the respective databases, suggest that shape-based eyebrow features may be used for biometric recognition and soft biometric classification.
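A minimal sketch of the three-classifier comparison (minimum distance via nearest centroid, LDA, and SVM) with scikit-learn on synthetic eyebrow-shape feature vectors; the actual feature definitions and evaluation protocol are assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.neighbors import NearestCentroid
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Toy shape-based eyebrow feature vectors with binary gender labels;
# the paper's actual features are not reproduced here.
rng = np.random.default_rng(6)
X = rng.random((200, 12))
y = rng.integers(0, 2, size=200)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

classifiers = {
    "MD (nearest centroid)": NearestCentroid(),
    "LDA": LinearDiscriminantAnalysis(),
    "SVM (RBF)": SVC(kernel="rbf", gamma="scale"),
}
for name, clf in classifiers.items():
    acc = clf.fit(X_tr, y_tr).score(X_te, y_te)
    print(f"{name}: {acc:.2f}")
```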
{"title":"Eyebrow shape-based features for biometric recognition and gender classification: A feasibility study","authors":"Yujie Dong, D. Woodard","doi":"10.1109/IJCB.2011.6117511","DOIUrl":"https://doi.org/10.1109/IJCB.2011.6117511","url":null,"abstract":"A wide variety of applications in forensic, government, and commercial fields require reliable personal identification. However, the recognition performance is severely affected when encountering non-ideal images caused by motion blur, poor contrast, various expressions, or illumination artifacts. In this paper, we investigated the use of shape-based eyebrow features under non-ideal imaging conditions for biometric recognition and gender classification. We extracted various shape-based features from the eyebrow images and compared three different classification methods: Minimum Distance Classifier (MD), Linear Discriminant Analysis Classifier (LDA) and Support Vector Machine Classifier (SVM). The methods were tested on images from two publicly available facial image databases: The Multiple Biometric Grand Challenge (MBGC) database and the Face Recognition Grand Challenge (FRGC) database. Obtained recognition rates of 90% using the MBGC database and 75% using the FRGC database as well as gender classification recognition rates of 96% and 97% for each database respectively, suggests the shape-based eyebrow features maybe be used for biometric recognition and soft biometric classification.","PeriodicalId":103913,"journal":{"name":"2011 International Joint Conference on Biometrics (IJCB)","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121577188","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}