Is gender classification across ethnicity feasible using discriminant functions?
Tejas I. Dhamecha, A. Sankaran, Richa Singh, Mayank Vatsa
Pub Date: 2011-10-11 | DOI: 10.1109/IJCB.2011.6117524
Over the years, automatic gender recognition has been used in many applications. However, limited research has analyzed gender recognition in cross-ethnicity scenarios. This research studies the performance of discriminant functions, including Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA) and Subclass Discriminant Analysis (SDA), when only a limited training database is available and ethnicity variations are unseen. The experiments are performed on a heterogeneous database of 8112 images that includes variations in illumination, expression, minor pose and ethnicity. Contrary to existing literature, the results show that PCA provides comparable, and slightly better, performance than PCA+LDA, PCA+SDA and PCA+SVM. The results also suggest that linear discriminant functions generalize well even with a limited number of training samples and principal components, and under cross-ethnicity variations.
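The dimensionality-reduction stage shared by all the compared pipelines is plain PCA; a minimal eigendecomposition sketch of that step (a generic illustration, not the authors' implementation):

```python
import numpy as np

def pca_project(X, n_components):
    """Project row-vector samples onto the top principal components.

    A minimal sketch of the PCA stage shared by the compared
    pipelines (PCA, PCA+LDA, PCA+SDA, PCA+SVM); not the authors'
    code, and the downstream classifier is omitted.
    """
    mu = X.mean(axis=0)
    Xc = X - mu                       # center the data
    cov = np.cov(Xc, rowvar=False)    # feature covariance
    vals, vecs = np.linalg.eigh(cov)
    order = np.argsort(vals)[::-1][:n_components]
    W = vecs[:, order]                # top eigenvectors as columns
    return Xc @ W, W, mu
```

For PCA+LDA or PCA+SDA, the projected coordinates would be fed to the discriminant stage instead of being matched directly.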
Face spoofing detection from single images using micro-texture analysis
Jukka Määttä, A. Hadid, M. Pietikäinen
Pub Date: 2011-10-11 | DOI: 10.1109/IJCB.2011.6117510
Current face biometric systems are vulnerable to spoofing attacks. A spoofing attack occurs when a person tries to masquerade as someone else by falsifying data and thereby gaining illegitimate access. Inspired by image quality assessment, characterization of printing artifacts, and differences in light reflection, we approach the problem of spoofing detection from a texture-analysis point of view. Indeed, face prints usually contain printing-quality defects that can be well detected using texture features. Hence, we present a novel approach based on analyzing facial image textures to detect whether there is a live person in front of the camera or a face print. The proposed approach analyzes the texture of facial images using multi-scale local binary patterns (LBP). Compared to many previous works, our approach is robust, computationally fast and does not require user cooperation. In addition, the texture features used for spoofing detection can also be used for face recognition, providing a unique feature space for coupling spoofing detection and face recognition. Extensive experimental analysis on a publicly available database showed excellent results compared to existing works.
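The basic 8-neighbour LBP operator underlying the multi-scale descriptor can be sketched as follows (a toy single-scale, single-radius version for illustration only; a spoofing detector would histogram these codes and classify the histogram):

```python
import numpy as np

def lbp_3x3(img):
    """Basic 8-neighbour local binary pattern codes for the interior
    pixels of a grayscale image; a toy, single-scale sketch of the
    texture-descriptor family used for spoofing detection."""
    c = img[1:-1, 1:-1]                     # center pixels
    codes = np.zeros(c.shape, dtype=np.uint8)
    # neighbour offsets, clockwise from the top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    for bit, (dy, dx) in enumerate(offsets):
        n = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (n >= c).astype(np.uint8) << bit
    return codes
```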
Reliability-balanced feature level fusion for fuzzy commitment scheme
C. Rathgeb, A. Uhl, Peter Wild
Pub Date: 2011-10-11 | DOI: 10.1109/IJCB.2011.6117535
Fuzzy commitment schemes have been established as a reliable means of binding cryptographic keys to binary feature vectors extracted from diverse biometric modalities. In addition, attempts have been made to extend fuzzy commitment schemes to incorporate multiple biometric feature vectors, yet these schemes commonly neglect the potential improvements offered by feature level fusion. In this paper a feature level fusion technique for fuzzy commitment schemes is presented. The proposed reliability-balanced feature level fusion re-arranges and combines two binary biometric templates so that error-correction capacities are exploited more effectively within a fuzzy commitment scheme, yielding improved key-retrieval rates. In experiments on iris-biometric data, reliability-balanced feature level fusion significantly outperforms conventional multi-biometric fuzzy commitment schemes, confirming the soundness of the proposed technique.
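At its core, a fuzzy commitment binds an error-correction codeword to a binary template by XOR; the fusion proposed in the paper re-arranges template bits before this step. A minimal sketch of the binding itself, assuming the ECC encode/decode stages exist elsewhere:

```python
import numpy as np

def commit(template, codeword):
    """Helper data = template XOR codeword.

    Sketch of the binding step of a fuzzy commitment scheme; the
    codeword is assumed to be an ECC encoding of the secret key.
    """
    return np.bitwise_xor(template, codeword)

def retrieve(helper, query):
    """XOR with a fresh template yields a noisy codeword; an
    error-correcting decoder (not shown) then recovers the key
    whenever the query is close enough to the enrolled template."""
    return np.bitwise_xor(helper, query)
```

Reliability-balancing aims to spread the unreliable bits so the decoder's correction capacity is used where it is needed most.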
Fusion of structured projections for cancelable face identity verification
B. Oh, K. Toh
Pub Date: 2011-10-11 | DOI: 10.1109/IJCB.2011.6117588
This work proposes a structured random projection via feature weighting for cancelable identity verification. Essentially, projected facial features are weighted based on their discrimination capability prior to matching. To conceal the face identity, an average over several templates with different transformations is computed. Finally, several cancelable templates extracted from partial face images are fused at score level via total error rate minimization. Empirical experiments in two scenarios using the AR, FERET and Sheffield databases show that the proposed method consistently outperforms competing state-of-the-art unsupervised methods in verification accuracy.
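The cancelable-template idea rests on seeded random projection: revoking a compromised template amounts to re-enrolling with a new seed. A generic sketch of that core step (the paper's feature weighting and template averaging are omitted):

```python
import numpy as np

def cancelable_template(feature, seed, out_dim):
    """Project a feature vector with a user-specific random matrix.

    Generic sketch of the random-projection core of cancelable
    biometrics: the stored template is revocable because a new seed
    yields an entirely different projection. The paper additionally
    weights features and averages several transformed templates,
    which is not shown here.
    """
    rng = np.random.default_rng(seed)
    P = rng.standard_normal((out_dim, feature.shape[0]))
    return P @ feature
```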
Fast speaker verification on mobile phone data using boosted slice classifiers
A. Roy, M. Magimai.-Doss, S. Marcel
Pub Date: 2011-10-11 | DOI: 10.1109/IJCB.2011.6117492
In this work, we investigate a computationally efficient speaker verification (SV) system involving boosted ensembles of simple threshold-based classifiers. The system is built on a novel set of features called “slice features”. Both the system and the features were inspired by the recent success of pixel-comparison-based ensemble approaches in computer vision. The proposed system was evaluated through speaker verification experiments on the MOBIO corpus of mobile phone speech, following a challenging protocol. It performed reasonably well compared to multiple state-of-the-art SV systems, with the benefit of significantly lower computational complexity. This combination of good performance and computational efficiency makes it well suited to SV implementation on portable devices such as mobile phones.
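At test time, a boosted ensemble of threshold classifiers reduces to a weighted vote of decision stumps, which is what makes it cheap on a phone. A minimal sketch of that decision rule (the boosting procedure and the slice-feature extraction are assumed done elsewhere):

```python
import numpy as np

def stump(x, feat, thresh, polarity):
    """One threshold-based weak classifier on a single feature."""
    return polarity if x[feat] >= thresh else -polarity

def ensemble_predict(x, stumps):
    """Weighted vote of boosted stumps: sign of the weighted sum.

    Sketch of the test-time rule of a boosted threshold-classifier
    ensemble; each stump is (weight, feature index, threshold,
    polarity), with weights assumed learned by boosting.
    """
    score = sum(alpha * stump(x, f, t, p) for alpha, f, t, p in stumps)
    return 1 if score >= 0 else -1
```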
Gait-based age estimation using a whole-generation gait database
Yasushi Makihara, Mayu Okumura, Haruyuki Iwama, Y. Yagi
Pub Date: 2011-10-11 | DOI: 10.1109/IJCB.2011.6117531
This paper addresses gait-based age estimation using a large-scale whole-generation gait database. Previous work on gait-based age estimation was evaluated on databases of at most 170 subjects with limited age variation, which is insufficient to statistically demonstrate the feasibility of gait-based age estimation. We therefore first constructed a much larger whole-generation gait database comprising 1,728 subjects with ages ranging from 2 to 94 years. We then provide a baseline algorithm implemented by Gaussian process regression, which has been successful in face-based age estimation, in conjunction with silhouette-based gait features such as the averaged silhouette (or Gait Energy Image) used extensively in gait recognition algorithms. Finally, experiments on the whole-generation gait database demonstrate the viability of gait-based age estimation.
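The Gaussian process regression step of such a baseline can be sketched with a squared-exponential kernel; the scalar inputs and hyperparameters below are illustrative stand-ins for the gait features, not the paper's setup:

```python
import numpy as np

def rbf_kernel(a, b, length_scale=1.0):
    """Squared-exponential kernel between two 1-D input arrays."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

def gp_predict(X_train, y_train, X_test, noise=1e-2):
    """GP regression posterior mean.

    Sketch of the age-regression step of the baseline; inputs here
    are scalar stand-ins for the gait features, and the kernel and
    noise hyperparameters are illustrative assumptions.
    """
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    K_s = rbf_kernel(X_test, X_train)
    return K_s @ np.linalg.solve(K, y_train)
```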
The effect of time on ear biometrics
Mina I. S. Ibrahim, M. Nixon, S. Mahmoodi
Pub Date: 2011-10-11 | DOI: 10.1109/IJCB.2011.6117584
We present an experimental study of how the time difference between gallery and probe image acquisition affects ear recognition performance; this is the first study of the time effect on ear biometrics. For recognition, we convolve banana wavelets with an ear image and then apply local binary patterns to the convolved image. The histograms of the resulting image are used as features to describe an ear, and a histogram intersection technique applied to the histograms of two ears measures their similarity. We also use analysis of variance (ANOVA) for feature selection, identifying the banana wavelets best suited to recognition. The experimental results show that the recognition rate is only slightly reduced by time: an average recognition rate of 98.5% is achieved for an eleven-month difference between gallery and probe on an unoccluded ear dataset of 1491 images selected from the Southampton University ear database.
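The similarity measure here is plain histogram intersection; for L1-normalised histograms it lies in [0, 1] and equals 1 only for identical histograms. A minimal sketch:

```python
import numpy as np

def hist_intersection(h1, h2):
    """Histogram intersection similarity: sum of bin-wise minima.

    For L1-normalised histograms the score is in [0, 1]. In the ear
    matcher described above, the histograms would be LBP histograms
    of banana-wavelet responses; any two histograms of equal length
    work here.
    """
    return float(np.minimum(h1, h2).sum())
```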
Biometric identification via eye movement scanpaths in reading
C. Holland, Oleg V. Komogortsev
Pub Date: 2011-10-11 | DOI: 10.1109/IJCB.2011.6117536
This paper presents an objective evaluation of various eye-movement-based biometric features and their ability to accurately and precisely distinguish unique individuals. Eye movements are uniquely counterfeit-resistant due to the complex neurological interactions and extraocular muscle properties involved in their generation. The biometric candidates considered cover a number of basic eye movements and their aggregated scanpath characteristics, including fixation count, average fixation duration, average saccade amplitude, average saccade velocity, average saccade peak velocity, the velocity waveform, scanpath length, scanpath area, regions of interest, scanpath inflections, the amplitude-duration relationship, the main sequence relationship, and the pairwise distance between fixations. An information fusion method for combining these metrics into a single identification algorithm is also presented. In limited testing, this method identified subjects with an equal error rate of 27%. These results indicate that scanpath-based biometric identification holds promise as a behavioral biometric technique.
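The reported equal error rate is the operating point at which the false accept and false reject rates coincide; a coarse threshold-sweep approximation of the metric (not the authors' identification algorithm) can be sketched as:

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """Approximate EER by sweeping thresholds over the observed
    scores and taking the point where max(FAR, FRR) is smallest.
    A coarse sketch of the evaluation metric only; higher scores
    are assumed to indicate a genuine match.
    """
    best = 1.0
    for t in np.sort(np.concatenate([genuine, impostor])):
        far = float(np.mean(impostor >= t))   # impostors accepted
        frr = float(np.mean(genuine < t))     # genuines rejected
        best = min(best, max(far, frr))
    return best
```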
Face recognition across time lapse: On learning feature subspaces
Brendan Klare, Anil K. Jain
Pub Date: 2011-10-11 | DOI: 10.1109/IJCB.2011.6117547
There is growing interest in understanding the impact of aging on face recognition performance, as well as in designing recognition algorithms that are largely invariant to temporal changes. While some progress has been made on this front, a fundamental question has yet to be answered: do face recognition systems that compensate for the effects of aging compromise recognition performance on faces that have not undergone any aging? The studies in this paper help confirm that age-invariant systems do seem to decrease performance in non-aging scenarios. This is demonstrated through training experiments on the largest face aging dataset studied in the literature to date (over 200,000 images from roughly 64,000 subjects). Further experiments demonstrate the impact of aging on two leading commercial face recognition systems. We also determine the regions of the face that remain most stable over time.
Towards automated pose invariant 3D dental biometrics
Xin Zhong, Deping Yu, K. Foong, T. Sim, Y. Wong, Ho-Lun Cheng
Pub Date: 2011-10-11 | DOI: 10.1109/IJCB.2011.6117541
This paper proposes a novel pose-invariant 3D dental biometrics framework for human identification by matching dental plasters. Using 3D overcomes a number of key problems that plague 2D methods, and to the best of our knowledge, our study is the first attempt at 3D dental biometrics. The framework includes a multi-scale feature extraction algorithm for extracting pose-invariant feature points and a triplet-correspondence algorithm for pose estimation. Preliminary experiments achieve 100% rank-1 accuracy when matching 7 postmortem (PM) samples against 100 ante-mortem (AM) samples; in a fully automated setting, accuracy reaches 71.4% at rank 1 and 100% at rank 4. Compared with existing algorithms, the feature point extraction and triplet-correspondence algorithms are faster and more robust for pose estimation, and the retrieval time for a single subject is significantly reduced. Furthermore, we find that the investigated dental features are discriminative and useful for identification. The high accuracy, fast retrieval speed and streamlined identification process suggest that the developed 3D framework is suitable for practical use in future dental biometrics applications. Finally, limitations and future research directions are discussed.
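Once a triplet of feature-point correspondences is hypothesised, estimating the pose reduces to least-squares rigid alignment, for which the Kabsch/SVD solution is standard. A generic sketch of that alignment step (not the paper's triplet-correspondence algorithm itself):

```python
import numpy as np

def rigid_align(P, Q):
    """Least-squares rotation R and translation t with R @ p + t ~ q.

    Standard Kabsch/SVD rigid alignment for matched 3-D point sets
    (rows of P and Q correspond); a generic sketch of the pose
    estimation that follows feature correspondence, not the
    authors' algorithm.
    """
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                  # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T    # guard against reflections
    t = cq - R @ cp
    return R, t
```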