Do you see what I see? A more realistic eyewitness sketch recognition
Hossein Nejati, T. Sim, E. M. Marroquín
2011 International Joint Conference on Biometrics (IJCB). Pub Date: 2011-10-11. DOI: 10.1109/IJCB.2011.6117497

Face sketches have been used in eyewitness testimony for about a century. However, 30 years of research shows that current eyewitness testimony methods are highly unreliable. Nonetheless, current face sketch recognition algorithms assume that eyewitness sketches are reliable and highly similar to their target faces. As psychological findings and recent work on face sketch recognition show, these assumptions are unrealistic, so current algorithms cannot handle real-world cases of eyewitness sketch recognition. In this paper, we address the eyewitness sketch recognition problem with a two-pronged approach: a more reliable eyewitness testimony method, and an accompanying face sketch recognition method built on realistic assumptions about sketch-photo similarity and individual eyewitness differences. In our testimony method, we first ask the eyewitness to draw a sketch of the target face directly and to provide some ancillary information about the target face. We then build a drawing profile of the eyewitness by asking him or her to draw a set of face photos; this profile implicitly captures the eyewitness's mental bias. In our recognition method, we first correct the sketch for this bias using the drawing profile, then recognize the resulting sketch with an optimized combination of the detected features and the ancillary information. Experimental results show that our method is 12 times better than the leading competing method at Rank-1 accuracy and 6 times better at Rank-10, and it maintains its superiority as the gallery size increases.
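The bias-correction idea can be sketched as follows. This is a toy illustration that assumes a landmark-coordinate representation of sketches, which the abstract does not specify: the drawing profile (sketches of known photos) yields the witness's systematic drawing error, which is then subtracted from a new eyewitness sketch.

```python
import numpy as np

def estimate_bias(drawn, truth):
    """Mean per-landmark displacement between a witness's drawn sketches
    and the ground-truth photos they copied (the 'drawing profile')."""
    return np.mean(np.asarray(drawn) - np.asarray(truth), axis=0)

def correct_sketch(sketch, bias):
    """Remove the witness's systematic bias from a new eyewitness sketch."""
    return np.asarray(sketch) - bias

# Toy profile: the witness consistently draws every landmark 2 units too low.
truth = [np.zeros((5, 2)), np.ones((5, 2))]
drawn = [t + np.array([0.0, 2.0]) for t in truth]
bias = estimate_bias(drawn, truth)
corrected = correct_sketch(truth[0] + np.array([0.0, 2.0]), bias)
```

In this toy, the corrected sketch recovers the true landmark positions exactly; a real drawing profile would of course capture noisier, feature-dependent distortions.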
Biometric zoos: Theory and experimental evidence
Mohammad Nayeem Teli, J. Beveridge, P. Phillips, G. Givens, D. Bolme, B. Draper
2011 International Joint Conference on Biometrics (IJCB). Pub Date: 2011-10-11. DOI: 10.1109/IJCB.2011.6117479

Several studies have shown the existence of biometric zoos. The premise is that in biometric systems people fall into distinct categories, labeled with animal names, indicating recognition difficulty. Different combinations of excessive false accepts or rejects correspond to labels such as Goat, Lamb, and Wolf. Previous work on biometric zoos has investigated the existence of zoos for the results of one algorithm on one data set. This work investigates whether biometric zoos generalize across algorithms and data sets. For example, if a subject is a Goat for algorithm A on data set X, is that subject also a Goat for algorithm B on data set Y? This paper introduces a theoretical framework for generalizing biometric zoos. Based on our framework, we develop an experimental methodology for determining whether biometric zoos generalize across algorithms and data sets, and we conduct a series of experiments to investigate the existence of zoos for two algorithms in FRVT 2006.
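The animal labels can be illustrated with a minimal scoring heuristic. This is a sketch of the classic zoo intuition (goats match poorly against themselves; lambs are easy for impostors to match), not the paper's formal criteria, and the percentile thresholds are illustrative assumptions:

```python
import numpy as np

def zoo_labels(genuine, impostor_as_target, low_pct=25, high_pct=75):
    """Label subjects with 'biometric zoo' animal names.

    genuine[i]           : genuine match scores for subject i
    impostor_as_target[i]: impostor scores where subject i is the target
    """
    g_means = np.array([np.mean(s) for s in genuine])
    i_means = np.array([np.mean(s) for s in impostor_as_target])
    goat_thr = np.percentile(g_means, low_pct)   # unusually weak self-matches
    lamb_thr = np.percentile(i_means, high_pct)  # unusually strong impostor matches
    labels = []
    for g, i in zip(g_means, i_means):
        if g <= goat_thr:
            labels.append("goat")
        elif i >= lamb_thr:
            labels.append("lamb")
        else:
            labels.append("sheep")
    return labels
```

Generalization in the paper's sense would then ask whether a subject labeled "goat" by one algorithm/data-set pair keeps that label under another.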
Dynamic signature for a closed-set identification based on nonlinear analysis
David Ahmedt-Aristizabal, E. Delgado-Trejos, J. Vargas-Bonilla, J. A. Jaramillo-Garzón
2011 International Joint Conference on Biometrics (IJCB). Pub Date: 2011-10-11. DOI: 10.1109/IJCB.2011.6117502

This paper presents a study of biometric identification using a methodology based on complexity measures. The identification system designed, implemented, and evaluated uses nonlinear dynamic techniques, namely Lempel-Ziv complexity, the largest Lyapunov exponent, the Hurst exponent, correlation dimension, Shannon entropy, and Kolmogorov entropy, to characterize the process and capture the intrinsic dynamics of the user's signature. Three databases were used in the validation process, SVC, MCYT, and our own (ITMMS-01), obtaining closed-set identification accuracies of 98.12%, 97.38%, and 99.50%, respectively. Satisfactory results were achieved with a conventional linear classifier at minimal computational cost.
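One of the listed measures, Lempel-Ziv complexity, is straightforward to compute on a binarized signature signal. A minimal sketch using an LZ78-style phrase count (one common variant; the paper does not specify which formulation it uses) and a median-threshold binarization (an assumption) might look like:

```python
import statistics

def lz_complexity(symbols):
    """Count phrases in an LZ78-style incremental parsing; higher counts
    indicate a less predictable (more complex) symbol sequence."""
    phrases, w = set(), ""
    for ch in symbols:
        w += ch
        if w not in phrases:
            phrases.add(w)
            w = ""
    return len(phrases) + (1 if w else 0)

def binarize(signal):
    """Binarize a 1-D signal (e.g. pen-tip x(t)) around its median,
    a common preprocessing step before Lempel-Ziv analysis."""
    m = statistics.median(signal)
    return "".join("1" if v > m else "0" for v in signal)
```

A constant (perfectly predictable) sequence parses into few phrases, while an irregular one parses into many, which is what makes the count usable as a signature-dynamics feature.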
Fingerprint matching by incorporating minutiae discriminability
Kai Cao, Eryun Liu, Liaojun Pang, Jimin Liang, Jie Tian
2011 International Joint Conference on Biometrics (IJCB). Pub Date: 2011-10-11. DOI: 10.1109/IJCB.2011.6117537

Traditional minutiae matching algorithms assume that every minutia is equally discriminative. This assumption is challenged by at least two facts. First, fingerprint minutiae tend to form clusters, and minutiae that are spatially close tend to have similar directions; when two different fingerprints have similar clusters, many minutiae may match well. Second, false minutiae may be extracted from low-quality fingerprint images, which raises both the false acceptance rate and the false rejection rate. In this paper, we analyze minutiae discriminability from the viewpoints of global spatial distribution and local quality. First, we propose an effective approach to detect such clustered minutiae of low discriminability and reduce the corresponding minutiae similarity. Second, we use minutiae and their neighbors to estimate minutia quality and incorporate it into the similarity calculation. Experimental results on FVC2004 and FVC-onGoing demonstrate that the proposed approaches effectively improve matching performance.
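The idea of down-weighting clustered minutiae can be sketched as follows. The density radius and the inverse-count weighting are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def cluster_weights(minutiae_xy, radius=20.0):
    """Weight each minutia inversely to how many neighbours fall within
    `radius` pixels: minutiae in dense clusters are less discriminative,
    so their contribution to the match score is discounted."""
    pts = np.asarray(minutiae_xy, dtype=float)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    neighbours = (d < radius).sum(axis=1) - 1  # exclude self-distance
    return 1.0 / (1.0 + neighbours)

# An isolated minutia keeps full weight; clustered ones are discounted.
w = cluster_weights([(0, 0), (5, 5), (8, 2), (200, 200)])
```

A matcher would then multiply each minutia-pair similarity by these weights (and, per the paper's second contribution, by a quality estimate) before aggregating the score.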
Inter-session variability modelling and joint factor analysis for face authentication
R. Wallace, Mitchell McLaren, C. McCool, S. Marcel
2011 International Joint Conference on Biometrics (IJCB). Pub Date: 2011-10-11. DOI: 10.1109/IJCB.2011.6117599

This paper applies inter-session variability modelling and joint factor analysis to face authentication using Gaussian mixture models. These techniques, originally developed for speaker authentication, aim to explicitly model and remove detrimental within-client (inter-session) variation from client models. We apply the techniques to face authentication on the publicly available BANCA, SCface, and MOBIO databases. We propose a face authentication protocol for the challenging SCface database, and provide the first results on the MOBIO still face protocol. The techniques provide relative reductions in error rate of up to 44%, using only limited training data. On the BANCA database, our results represent a 31% reduction in error rate when benchmarked against previous work.
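The core inter-session variability idea, learning a low-dimensional within-client subspace and projecting it away, can be illustrated on raw feature vectors. This is a deliberate simplification: the paper operates on GMM supervectors with a proper probabilistic model, whereas the sketch below uses plain PCA on client-mean-centred data.

```python
import numpy as np

def session_projector(features_by_client, rank=1):
    """Estimate the dominant within-client (session) variation directions
    by PCA on client-mean-centred features, and return a projector onto
    their orthogonal complement (i.e. remove session variation)."""
    centred = np.vstack([f - f.mean(axis=0) for f in features_by_client])
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    U = vt[:rank].T                      # session-variability basis
    return np.eye(U.shape[0]) - U @ U.T  # projector removing that subspace

# Toy data: 2-D features where the session nuisance lies along the y-axis
# and client identity lies along x (offsets 0.0 vs 5.0).
rng = np.random.default_rng(0)
clients = [np.array([[c, 0.0]]) + np.column_stack(
               (0.01 * rng.standard_normal(8), rng.standard_normal(8)))
           for c in (0.0, 5.0)]
P = session_projector(clients, rank=1)
cleaned = clients[0] @ P.T
```

After projection the nuisance (y) component is nearly annihilated while the identity-bearing x component is preserved, which is the effect ISV/JFA achieve in the supervector domain.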
Graph modeling based local descriptor selection via a hierarchical structure for biometric recognition
Xiaobo Zhang, Zhenan Sun, T. Tan
2011 International Joint Conference on Biometrics (IJCB). Pub Date: 2011-10-11. DOI: 10.1109/IJCB.2011.6117517

Local-descriptor-based image representation is widely used in biometrics and has achieved promising results. Because the feature space is large and local descriptors are redundant, the most distinctive local descriptors are usually extracted for sparse image representation. In this paper, we describe local-descriptor-based image representation with a graph model in which each node is a local descriptor (we call it an “atom”) and the edges denote the relationships between atoms. Based on this model, a hierarchical structure is constructed to select the most distinctive local descriptors. We adopt a two-layer structure comprising local selection and global selection. In the first layer, L1/Lq-regularized least-squares regression reduces the redundancy of local descriptors within local regions. In the second layer, AdaBoost learning selects local descriptors from the first layer's results. We apply this method to long-range personal identification using binocular regions. Our method selects distinctive local descriptors while reducing the redundancy among them, and achieves encouraging results on our collected binocular database and on CASIA-Iris-Distance. In particular, in our experiments it is about 50 times faster than the traditional AdaBoost-learning-based method.
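For q = 2, the first-layer L1/Lq penalty induces group sparsity, and its characteristic operation is block soft-thresholding: whole descriptor groups whose coefficient blocks are small get zeroed out (deselected). The sketch below shows just that proximal step, not the paper's full regression solver:

```python
import numpy as np

def block_soft_threshold(coefs, groups, lam):
    """Proximal operator of the L1/L2 (group-lasso) penalty: shrink each
    descriptor group's coefficient block toward zero; groups whose block
    norm falls below `lam` are zeroed, i.e. the descriptors are dropped."""
    out = np.array(coefs, dtype=float)
    kept = []
    for g in groups:
        norm = np.linalg.norm(out[g])
        if norm <= lam:
            out[g] = 0.0                 # redundant group: deselect
        else:
            out[g] *= 1.0 - lam / norm   # shrink but keep the group
            kept.append(g)
    return out, kept

coefs = np.array([0.1, -0.1, 3.0, 4.0])
groups = [[0, 1], [2, 3]]   # two hypothetical local-descriptor groups
shrunk, kept = block_soft_threshold(coefs, groups, lam=0.5)
```

Iterating this step inside a proximal-gradient loop over the least-squares loss yields the group-sparse selection that the second-layer AdaBoost then refines.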
Face spoofing detection through partial least squares and low-level descriptors
W. R. Schwartz, A. Rocha, H. Pedrini
2011 International Joint Conference on Biometrics (IJCB). Pub Date: 2011-10-11. DOI: 10.1109/IJCB.2011.6117592

Personal identity verification based on biometrics has received increasing attention since it allows reliable authentication through intrinsic characteristics such as face, voice, iris, fingerprint, and gait. In particular, face recognition techniques have been used in applications such as security surveillance, access control, crime solving, and law enforcement. To strengthen the results of verification, biometric systems must be robust against spoofing attempts with photographs or videos, two common ways of bypassing a face recognition system. In this paper, we describe an anti-spoofing solution based on a set of low-level feature descriptors capable of distinguishing between ‘live’ and ‘spoof’ images and videos. The proposed method explores both spatial and temporal information to learn characteristics that distinguish the two classes. Experiments conducted to validate our solution on datasets containing images and videos show results comparable to state-of-the-art approaches.
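Partial least squares projects the high-dimensional descriptor vectors onto directions that covary with the live/spoof label. A single-component sketch (the first PLS weight vector plus a midpoint threshold; illustrative, not the authors' full pipeline) captures the essence:

```python
import numpy as np

def pls_direction(X, y):
    """First PLS weight vector: the direction of maximal covariance
    between centred features X and labels y (+1 live, -1 spoof)."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    w = Xc.T @ yc
    return w / np.linalg.norm(w)

def decision_scores(X, y, X_new):
    """Project new samples onto the learned direction; threshold at the
    midpoint of the training class means (a simple stand-in classifier)."""
    w = pls_direction(X, y)
    train = (X - X.mean(axis=0)) @ w
    thr = 0.5 * (train[y > 0].mean() + train[y < 0].mean())
    return (X_new - X.mean(axis=0)) @ w - thr

# Toy features: live samples cluster near x=+1, spoof samples near x=-1.
X = np.array([[1.0, 0.2], [1.2, 0.1], [-1.0, 0.0], [-0.8, -0.1]])
y = np.array([1.0, 1.0, -1.0, -1.0])
scores = decision_scores(X, y, np.array([[1.0, 0.0], [-1.0, 0.0]]))
```

Positive scores indicate "live", negative "spoof"; with many PLS components and rich spatio-temporal descriptors, this is the discriminative mechanism the paper relies on.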
Offline signature verification using classifier combination of HOG and LBP features
M. Yilmaz, B. Yanikoglu, C. Tirkaz, A. Kholmatov
2011 International Joint Conference on Biometrics (IJCB). Pub Date: 2011-10-11. DOI: 10.1109/IJCB.2011.6117473

We present an offline signature verification system based on a signature's local histogram features. The signature is divided into zones using both the Cartesian and polar coordinate systems, and two different histogram features are calculated for each zone: histogram of oriented gradients (HOG) and histogram of local binary patterns (LBP).
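The LBP half of the zoned representation can be sketched with a minimal 8-neighbour LBP and a Cartesian grid (HOG would be computed analogously from gradient orientation histograms; the zone count and LBP variant here are illustrative assumptions):

```python
import numpy as np

def lbp_codes(img):
    """8-neighbour local binary pattern code for each interior pixel:
    one bit per neighbour, set when the neighbour >= the centre."""
    c = img[1:-1, 1:-1]
    neighbours = [img[:-2, :-2], img[:-2, 1:-1], img[:-2, 2:],
                  img[1:-1, 2:], img[2:, 2:], img[2:, 1:-1],
                  img[2:, :-2], img[1:-1, :-2]]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, n in enumerate(neighbours):
        codes |= ((n >= c) << bit).astype(np.uint8)
    return codes

def zone_histograms(img, zones=(2, 2)):
    """Concatenate normalised 256-bin LBP histograms over a Cartesian
    grid of zones; this vector is one zone-feature block of the system."""
    codes = lbp_codes(np.asarray(img, dtype=np.int32))
    h, w = codes.shape
    feats = []
    for i in range(zones[0]):
        for j in range(zones[1]):
            z = codes[i * h // zones[0]:(i + 1) * h // zones[0],
                      j * w // zones[1]:(j + 1) * w // zones[1]]
            hist = np.bincount(z.ravel(), minlength=256)
            feats.append(hist / max(z.size, 1))
    return np.concatenate(feats)

feats = zone_histograms(np.ones((8, 8)))
```

On a flat image every pixel's neighbours equal its centre, so all codes are 255 and each zone's histogram is a spike at bin 255, a quick sanity check of the encoding.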
Gait recognition using periodic temporal super resolution for low frame-rate videos
Naoki Akae, Yasushi Makihara, Y. Yagi
2011 International Joint Conference on Biometrics (IJCB). Pub Date: 2011-10-11. DOI: 10.1109/IJCB.2011.6117530

This paper describes a method of gait recognition in which both the gallery and the probe are low frame-rate videos. The sparsity of phases (stances) per gait period makes it much harder to match the gait using existing gait recognition algorithms. Consequently, we introduce a super-resolution technique that generates a high frame-rate periodic image sequence as a preprocessing step before matching. First, the initial phase of each frame is estimated from an exemplar high frame-rate gait image sequence. Images between adjacent frames, sorted by estimated phase, are then filled in using a morphing technique to avoid ghosting effects. Next, a manifold of the periodic gait image sequence is reconstructed from the estimated phases and morphed images. Finally, phase estimation and manifold reconstruction are iterated within an energy-minimization framework to generate better high frame-rate images. Experiments with real data from 100 subjects demonstrate the effectiveness of the proposed method, particularly for low frame-rate videos below 5 fps.
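The reconstruction step, filling in missing stances between frames sorted by estimated phase, can be approximated with per-pixel linear interpolation over one cyclic period. The paper uses morphing precisely to avoid the ghosting such interpolation causes, so this is a stand-in for the idea, not the method:

```python
import numpy as np

def upsample_period(frames, phases, n_out):
    """Given sparse frames of one gait period and their phase estimates in
    [0, 1), return a dense, phase-uniform sequence by linearly
    interpolating each pixel over the (cyclically wrapped) phase axis."""
    order = np.argsort(phases)
    ph = np.asarray(phases, dtype=float)[order]
    stack = np.asarray(frames, dtype=float)[order]
    # Wrap the first frame to phase + 1 so the period closes cyclically.
    ph = np.concatenate([ph, ph[:1] + 1.0])
    stack = np.concatenate([stack, stack[:1]])
    targets = np.linspace(ph[0], ph[0] + 1.0, n_out, endpoint=False)
    flat = stack.reshape(len(stack), -1)
    out = np.stack([np.interp(targets, ph, flat[:, p])
                    for p in range(flat.shape[1])], axis=1)
    return out.reshape((n_out,) + stack.shape[1:])

# Two 1x1 'frames' half a period apart, upsampled to four phase samples.
dense = upsample_period([np.array([[0.0]]), np.array([[1.0]])],
                        phases=[0.0, 0.5], n_out=4)
```

The exemplar-based phase estimation and the iterative energy minimization of the paper would wrap around this kind of resampling, replacing the linear blend with morphing.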
Hierarchical and discriminative bag of features for face profile and ear based gender classification
Guangpeng Zhang, Yunhong Wang
2011 International Joint Conference on Biometrics (IJCB). Pub Date: 2011-10-11. DOI: 10.1109/IJCB.2011.6117590

Gender is an important demographic attribute of human beings, and automatic face-based gender classification has promising applications in various fields. Previous methods mainly deal with frontal face images, which in many cases cannot be easily obtained. In contrast, this paper concentrates on gender classification based on face profiles and ear images. A hierarchical and discriminative bag-of-features technique is proposed to extract powerful features, which are classified by support vector classification (SVC) with a histogram intersection kernel. Using the SVC outputs, the two modalities are fused at the score level through Bayesian analysis to improve accuracy. Experiments conducted on texture images from the UND biometrics data set Collection F achieve an average classification accuracy of 97.65%, which is comparable to the state of the art. Our work can be used in cooperation with existing frontal-face-based methods for accurate multi-view gender classification.
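Two of the building blocks are easy to sketch: the histogram intersection kernel (usable with any SVM that accepts a precomputed Gram matrix) and a naive-Bayes-style score-level fusion of the two modalities' posteriors. The fusion form below is an illustrative stand-in for the paper's Bayesian analysis:

```python
import numpy as np

def hist_intersection_kernel(A, B):
    """K[i, j] = sum_k min(A[i, k], B[j, k]) for histogram feature rows;
    pass the result to an SVM via its precomputed-kernel interface."""
    return np.minimum(A[:, None, :], B[None, :, :]).sum(axis=-1)

def fuse_posteriors(p_profile, p_ear, prior_male=0.5):
    """Naive-Bayes fusion of two modalities' P(male | x), assuming each
    posterior was computed under a 0.5 per-modality prior: multiply the
    likelihood ratios, apply the global prior, convert back to a
    probability."""
    lr = (p_profile / (1 - p_profile)) * (p_ear / (1 - p_ear))
    odds = lr * prior_male / (1 - prior_male)
    return odds / (1 + odds)
```

For example, two moderately confident modalities reinforce each other (0.8 and 0.8 fuse to about 0.94), while two uninformative ones (0.5 and 0.5) leave the decision at the prior.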