Pub Date: 2011-10-11 | DOI: 10.1109/IJCB.2011.6117516
O. K. Manyam, Neeraj Kumar, P. Belhumeur, D. Kriegman
Face recognition systems classically recognize people individually. When presented with a group photograph containing multiple people, such systems implicitly assume statistical independence between each detected face. We question this basic assumption and consider instead that there is a dependence between face regions from the same image; after all, the image was acquired with a single camera, under consistent lighting (distribution, direction, spectrum), camera motion, and scene/camera geometry. Such naturally occurring commonalities between face images can be exploited when recognition decisions are made jointly across the faces, rather than independently. Furthermore, when recognizing people in isolation, some features such as color are usually uninformative in unconstrained settings. But by considering pairs of people, the relative color difference provides valuable information. This paper reconsiders the independence assumption, introduces new features and methods for recognizing pairs of individuals in group photographs, and demonstrates a marked improvement when these features are used in joint decision making vs. independent decision making. While these features alone are only moderately discriminative, we combine them with state-of-the-art attribute features and demonstrate effective recognition performance. Initial experiments on two datasets show promising improvements in accuracy.
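The pairwise color cue described above can be sketched in a few lines: absolute face color is unreliable across unconstrained images, but the relative difference between two faces from the same photograph is directly comparable. A minimal illustration; the function names and toy pixel values below are ours, not from the paper:

```python
# Sketch of a pairwise color feature: instead of using a face's absolute
# color (uninformative in isolation), compare the mean color of two face
# regions taken from the same photograph.

def mean_color(face_pixels):
    """Average (R, G, B) over a face region given as a list of pixel tuples."""
    n = len(face_pixels)
    return tuple(sum(p[c] for p in face_pixels) / n for c in range(3))

def pairwise_color_feature(face_a, face_b):
    """Per-channel mean color difference between two faces in one image."""
    ma, mb = mean_color(face_a), mean_color(face_b)
    return tuple(a - b for a, b in zip(ma, mb))

# Two toy "faces" from the same photo: one warmer (more red), one cooler.
face_a = [(200, 120, 100), (210, 130, 110)]
face_b = [(150, 120, 140), (160, 130, 150)]
print(pairwise_color_feature(face_a, face_b))  # (50.0, 0.0, -40.0)
```

Because both crops share the camera and illumination, this relative difference is far more stable across photographs than either face's absolute color.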
Title: Two faces are better than one: Face recognition in group photographs. Published in: 2011 International Joint Conference on Biometrics (IJCB).
Pub Date: 2011-10-11 | DOI: 10.1109/IJCB.2011.6117514
S. Biswas, G. Aggarwal, P. Flynn
Low-resolution surveillance videos with uncontrolled pose and illumination present a significant challenge to both face tracking and recognition algorithms. Considerable appearance difference between the probe videos and high-resolution controlled images in the gallery acquired during enrollment makes the problem even harder. In this paper, we extend the simultaneous tracking and recognition framework [22] to address the problem of matching high-resolution gallery images with surveillance-quality probe videos. We propose using a learning-based likelihood measurement model to handle the large appearance and resolution difference between the gallery images and probe videos. The measurement model consists of a mapping which transforms the gallery and probe features to a space in which their inter-Euclidean distances approximate the distances that would have been obtained had all the descriptors been computed from good quality frontal images. Experimental results on real surveillance-quality videos and comparisons with related approaches show the effectiveness of the proposed framework.
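The measurement model above hinges on a learned mapping under which Euclidean distances between mapped probe features and gallery features become meaningful. As a hedged, one-dimensional illustration of that idea only (the paper learns a much richer transform; the affine fit and toy data below are ours):

```python
# Toy version of a learned feature mapping: fit probe features to their
# corresponding gallery features so that, after mapping, plain distance
# comparisons make sense. Here the map is just y ~ a*x + b by ordinary
# least squares.

def fit_affine(xs, ys):
    """Closed-form least squares for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

# Toy corresponding features: low-resolution probe values vs. gallery values.
probe   = [1.0, 2.0, 3.0, 4.0]
gallery = [2.1, 4.0, 6.1, 8.0]
a, b = fit_affine(probe, gallery)

# A new probe feature is mapped before being compared to the gallery.
mapped = a * 2.5 + b
```

In the actual framework the mapping is learned over full descriptor vectors, but the principle is the same: bring the two domains into a common space before measuring distance.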
Title: Face recognition in low-resolution videos using learning-based likelihood measurement model. Published in: 2011 International Joint Conference on Biometrics (IJCB).
Pub Date: 2011-10-11 | DOI: 10.1109/IJCB.2011.6117546
Ruifang Wang, D. Ramos, Julian Fierrez
In forensic applications, the evidential value of palmprints is obvious: surveys of law enforcement agencies indicate that 30 percent of the latents recovered from crime scenes are from palms. Consequently, developing forensic automatic palmprint identification technology is an urgent and challenging task, dealing with latent (i.e., partial) and full palmprints captured or recovered at 500 ppi at least (the current standard in forensic applications) for minutiae-based offline recognition. Moreover, a rigorous quantification of the evidential value of biometrics, such as fingerprints and palmprints, is essential in modern forensic science. Recently, radial triangulation has been proposed as a step towards this objective in fingerprints, using minutiae manually extracted by experts. In this work we help to automate such a comparison strategy and generalize it to palmprints. Firstly, palmprint segmentation and enhancement are implemented for full-print feature extraction by a commercial biometric SDK in an automatic way, while features of latent prints are manually extracted by forensic experts. Then a latent-to-full palmprint comparison algorithm based on radial triangulation is proposed, in which radial triangulation is utilized for minutiae modeling. Finally, 22 latent palmprints from real forensic cases and 8680 full palmprints from the criminal investigation field are used for performance evaluation. Experimental results prove the usability and efficiency of the proposed system, i.e., a rank-1 identification rate of 62% is achieved despite the inherent difficulty of latent-to-full palmprint comparison.
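Radial triangulation, as used above for minutiae modeling, connects each minutia to the centroid of the minutiae configuration, yielding radial segments and triangles whose geometry is tolerant to translation and rotation. A minimal sketch of the geometric construction only; the paper's exact feature set differs:

```python
# Sketch of the radial-triangulation construction: describe each minutia
# by its distance and polar angle about the centroid of all minutiae.
import math

def radial_triangulation(minutiae):
    """Return (radius, polar angle) of each minutia about the centroid."""
    n = len(minutiae)
    cx = sum(x for x, y in minutiae) / n
    cy = sum(y for x, y in minutiae) / n
    return [(math.hypot(x - cx, y - cy), math.atan2(y - cy, x - cx))
            for x, y in minutiae]

# Four toy minutiae at the corners of a square: all radii are equal,
# and the representation is unchanged if the square is translated.
pts = [(0, 0), (2, 0), (2, 2), (0, 2)]
feats = radial_triangulation(pts)
print([round(r, 3) for r, _ in feats])  # [1.414, 1.414, 1.414, 1.414]
```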
Title: Latent-to-full palmprint comparison based on radial triangulation under forensic conditions. Published in: 2011 International Joint Conference on Biometrics (IJCB).
Pub Date: 2011-10-11 | DOI: 10.1109/IJCB.2011.6117525
A. Sankaran, Tejas I. Dhamecha, Mayank Vatsa, Richa Singh
This research presents a forensics application of matching two latent fingerprints. In crime scene settings, it is often required to match multiple latent fingerprints. Unlike matching latents with inked or live fingerprints, this research problem is very challenging and requires proper analysis and attention. The contribution of this paper is threefold: (i) a comparative analysis of existing algorithms is presented for this application, (ii) fusion and context switching frameworks are presented to improve the identification performance, and (iii) a multi-latent fingerprint database is prepared. The experiments highlight the need for improved feature extraction and processing methods and show large scope for improvement in this important research problem.
Title: On matching latent to latent fingerprints. Published in: 2011 International Joint Conference on Biometrics (IJCB).
Pub Date: 2011-10-11 | DOI: 10.1109/IJCB.2011.6117544
Cunjian Chen, A. Ross
Automatic gender classification based on face images is receiving increased attention in the biometrics community. Most gender classification systems have been evaluated only on face images captured in the visible spectrum. In this work, the possibility of deducing gender from face images obtained in the near-infrared (NIR) and thermal (THM) spectra is established. It is observed that the use of local binary pattern histogram (LBPH) features along with discriminative classifiers results in reasonable gender classification accuracy in both the NIR and THM spectra. Further, the performance of human subjects in classifying thermal face images is studied. Experiments suggest that machine-learning methods are better suited than humans for gender classification from face images in the thermal spectrum.
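The LBPH features mentioned above code each pixel by thresholding its eight neighbours against the centre value and pooling the resulting 8-bit codes into a 256-bin histogram; a discriminative classifier is then trained on these histograms. A dependency-free sketch of the histogram step (the classifier, and the paper's exact LBP variant, are omitted):

```python
# Basic local binary pattern histogram: for each interior pixel, build an
# 8-bit code from neighbour >= centre comparisons, then count the codes.

def lbp_histogram(img):
    """256-bin LBP histogram of a 2-D grayscale image (list of lists)."""
    h, w = len(img), len(img[0])
    # Clockwise 8-neighbourhood starting at the top-left neighbour.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    hist = [0] * 256
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            centre = img[r][c]
            code = 0
            for bit, (dr, dc) in enumerate(offsets):
                if img[r + dr][c + dc] >= centre:
                    code |= 1 << bit
            hist[code] += 1
    return hist

# A 3x3 image has a single interior pixel (centre value 50), so exactly
# one code is counted.
img = [[10, 20, 30],
       [40, 50, 60],
       [70, 80, 90]]
hist = lbp_histogram(img)
```

In practice the face is divided into a grid of cells and the per-cell histograms are concatenated, which preserves coarse spatial layout.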
Title: Evaluation of gender classification methods on thermal and near-infrared face images. Published in: 2011 International Joint Conference on Biometrics (IJCB).
Pub Date: 2011-10-11 | DOI: 10.1109/IJCB.2011.6117511
Yujie Dong, D. Woodard
A wide variety of applications in forensic, government, and commercial fields require reliable personal identification. However, the recognition performance is severely affected when encountering non-ideal images caused by motion blur, poor contrast, various expressions, or illumination artifacts. In this paper, we investigated the use of shape-based eyebrow features under non-ideal imaging conditions for biometric recognition and gender classification. We extracted various shape-based features from the eyebrow images and compared three different classification methods: Minimum Distance Classifier (MD), Linear Discriminant Analysis Classifier (LDA) and Support Vector Machine Classifier (SVM). The methods were tested on images from two publicly available facial image databases: the Multiple Biometric Grand Challenge (MBGC) database and the Face Recognition Grand Challenge (FRGC) database. The obtained recognition rates of 90% using the MBGC database and 75% using the FRGC database, as well as gender classification rates of 96% and 97% for each database respectively, suggest that the shape-based eyebrow features may be used for biometric recognition and soft biometric classification.
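Of the three classifiers compared above, the Minimum Distance (MD) classifier is the simplest: a sample is assigned to the class whose mean feature vector is nearest in Euclidean distance. A toy sketch with made-up 2-D "eyebrow shape" features (the values are illustrative, not from the paper):

```python
# Minimum distance classification: nearest class mean in Euclidean distance.
import math

def md_classify(sample, class_means):
    """Return the label of the class mean closest to the sample."""
    return min(class_means,
               key=lambda label: math.dist(sample, class_means[label]))

# Hypothetical per-class mean feature vectors.
means = {"male": (2.0, 1.0), "female": (5.0, 4.0)}
print(md_classify((2.5, 1.5), means))  # male
print(md_classify((4.8, 3.9), means))  # female
```

MD needs only the class means, which is why it is often the baseline against which LDA and SVM are measured.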
Title: Eyebrow shape-based features for biometric recognition and gender classification: A feasibility study. Published in: 2011 International Joint Conference on Biometrics (IJCB).
Pub Date: 2011-10-11 | DOI: 10.1109/IJCB.2011.6117548
Brendan Klare, A. Paulino, Anil K. Jain
A study of the distinctiveness of different facial features (MLBP, SIFT, and facial marks) with respect to distinguishing identical twins is presented. The accuracy of distinguishing between identical twin pairs is measured using the entire face, as well as each facial component (eyes, eyebrows, nose, and mouth). The impact of discriminant learning methods on twin face recognition is investigated. Experimental results indicate that features that perform well in distinguishing identical twins are not always consistent with the features that best distinguish two non-twin faces.
Title: Analysis of facial features in identical twins. Published in: 2011 International Joint Conference on Biometrics (IJCB).
Pub Date: 2011-10-11 | DOI: 10.1109/IJCB.2011.6117538
Ngoc-Son Vu, A. Caplier
A good face recognition system is one that quickly delivers highly accurate results to the end user. For this purpose, the face representation must be robust, discriminative, and of low computational cost in terms of both time and space. Inspired by the recently proposed POEM (Patterns of Oriented Edge Magnitudes) feature set, which considers the relationships between edge distributions of different image patches and is argued to balance these three concerns well, this work further exploits patterns of both orientations and magnitudes to build a more efficient algorithm. We first present novel features called Patterns of Dominant Orientations (PDO) which consider the relationships between "dominant" orientations of local image regions at different scales. We also propose to apply the whitened PCA technique to both the POEM and PDO based representations to get more compact and discriminative face descriptors. We then show that the two methods have complementary strengths and that by combining the two descriptors, one obtains stronger results than with either of them considered separately. Through experiments carried out on several common benchmarks, including both frontal and non-frontal FERET as well as the AR datasets, we show that our approach is more efficient than contemporary ones.
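Whitened PCA, applied above to the POEM and PDO representations, projects descriptors onto the leading eigenvectors and rescales each projection by the inverse square root of its eigenvalue, so that every retained component has unit variance. To stay dependency-free, this toy sketch uses axis-aligned data, for which the principal directions coincide with the coordinate axes and whitening reduces to per-dimension standardisation; a real implementation would eigendecompose the covariance matrix:

```python
# PCA whitening for the special case of axis-aligned data: centre each
# dimension and divide by its standard deviation, giving unit variance
# per component (the general case rotates into the eigenbasis first).

def whiten(data):
    """Zero-mean, unit-variance output per dimension."""
    dims = list(zip(*data))
    out_dims = []
    for d in dims:
        n = len(d)
        m = sum(d) / n
        var = sum((x - m) ** 2 for x in d) / n
        out_dims.append([(x - m) / var ** 0.5 for x in d])
    return [list(row) for row in zip(*out_dims)]

# Two dimensions with very different scales end up equally weighted.
data = [[1.0, 10.0], [3.0, 30.0], [5.0, 50.0]]
white = whiten(data)
```

The practical effect is that no single high-variance descriptor dimension dominates the comparison, which is what makes the whitened descriptors more discriminative.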
Title: Mining patterns of orientations and magnitudes for face recognition. Published in: 2011 International Joint Conference on Biometrics (IJCB).
Pub Date: 2011-10-11 | DOI: 10.1109/IJCB.2011.6117475
Rubisley de P. Lemes, O. Bellon, Luciano Silva, Anil K. Jain
We present some results on newborn identification through high-resolution images of palmar surfaces. To our knowledge, there is no biometric system currently available that can be effectively used for newborn identification. The manual procedure of capturing inked footprints, used in practice for this purpose, is limited to use inside hospitals and is not an effective solution for identification purposes. The use of friction ridge patterns on the hands of newborns is challenging due to both the small size of newborns' papillary ridges, which are, on average, 2.5 to 3 times smaller than the ridges in adult fingerprints, and their fragility, which makes them prone to deformation. The proposed palmprint-based automatic system for newborn identification is relatively easy to use and shows the feasibility of this approach. Experiments were performed on images collected from 250 newborns at the University Hospital (Universidade Federal do Paraná). An image acquisition protocol was developed in order to collect suitable images. When considering the good quality palmar images, the results show that the proposed approach is promising.
Title: Biometric recognition of newborns: Identification using palmprints. Published in: 2011 International Joint Conference on Biometrics (IJCB).
Pub Date: 2011-10-11 | DOI: 10.1109/IJCB.2011.6117555
Huibin Li, Di Huang, J. Morvan, Liming Chen
This paper proposes a novel approach for 3D face recognition by learning a weighted sparse representation of encoded facial normal information. To comprehensively describe the 3D facial surface, the three components of the normal vector (along the X, Y, and Z axes, respectively) are locally encoded into their corresponding normal pattern histograms. These are finally fed to a sparse representation classifier enhanced by learning-based spatial weights. Experimental results achieved on the FRGC v2.0 database show that the proposed encoded normal information is much more discriminative than the original normal information. Moreover, the patch-based weights learned using the FRGC v1.0 and Bosphorus datasets also demonstrate the importance of each facial physical component for 3D face recognition.
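The three normal components used above can be recovered from a range (depth) image by finite differences: for a surface z(x, y), an unnormalised normal is (-dz/dx, -dz/dy, 1). A minimal sketch of that step only; the paper's histogram encoding and weighted sparse classifier are omitted:

```python
# Estimate the unit surface normal at an interior pixel of a depth map
# using central finite differences.
import math

def normal_at(depth, r, c):
    """Unit surface normal at interior pixel (r, c); columns index x."""
    dzdx = (depth[r][c + 1] - depth[r][c - 1]) / 2.0
    dzdy = (depth[r + 1][c] - depth[r - 1][c]) / 2.0
    n = (-dzdx, -dzdy, 1.0)
    norm = math.sqrt(sum(v * v for v in n))
    return tuple(v / norm for v in n)

# A planar patch z = x: the normal tilts equally in X and Z,
# with no Y component.
depth = [[0, 1, 2],
         [0, 1, 2],
         [0, 1, 2]]
print(normal_at(depth, 1, 1))
```

Each of the three returned components would then be pooled over local patches into its own histogram, giving the per-component descriptors the abstract describes.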
Title: Learning weighted sparse representation of encoded facial normal information for expression-robust 3D face recognition. Published in: 2011 International Joint Conference on Biometrics (IJCB).