Pub Date: 2011-10-11 | DOI: 10.1109/IJCB.2011.6117518
H. Bhatt, Samarth Bharadwaj, Mayank Vatsa, Richa Singh, A. Ross, A. Noore
Multibiometric systems fuse the evidence (e.g., match scores) pertaining to multiple biometric modalities or classifiers. Most score-level fusion schemes discussed in the literature require the processing (i.e., feature extraction and matching) of every modality prior to invoking the fusion scheme. This paper presents a framework for dynamic classifier selection and fusion based on the quality of the gallery and probe images associated with each modality and its classifiers. The quality assessment algorithm for each biometric modality computes a quality vector for the gallery and probe images, which is used for classifier selection. These vectors are used to train Support Vector Machines (SVMs) for decision making. In the proposed framework, the biometric modalities are arranged sequentially so that the stronger modality has higher priority for being processed. Since fusion is invoked only when all unimodal classifiers are rejected by the SVMs, the average computational time of the proposed framework is significantly reduced. Experimental results on multimodal databases involving face and fingerprint show that the proposed quality-based classifier selection framework yields good performance even when the quality of the biometric sample is sub-optimal.
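The sequential selection logic described in the abstract can be sketched in a few lines. The quality threshold, the 0.5 decision threshold, and the simple accept rule standing in for the trained SVMs are illustrative assumptions, not the authors' implementation:

```python
def verify(modalities):
    """Process modalities in priority order (strongest first).

    Each entry is (name, quality, match_score), quality and score in [0, 1].
    The quality-based accept rule below is a stand-in for the paper's
    trained SVM classifier selection.
    """
    scores = []
    for name, quality, score in modalities:  # strongest modality first
        scores.append(score)
        # Stand-in for the SVM: trust this unimodal classifier only when
        # the gallery/probe quality is high enough.
        if quality >= 0.8:
            return name, score >= 0.5
    # All unimodal classifiers rejected: fall back to score-level fusion
    # over the modalities processed so far.
    fused = sum(scores) / len(scores)
    return "fusion", fused >= 0.5
```

Because the loop returns as soon as one high-quality modality suffices, weaker modalities (and fusion) are only processed on the fallback path, which is what reduces the average computational time.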
{"title":"A framework for quality-based biometric classifier selection","authors":"H. Bhatt, Samarth Bharadwaj, Mayank Vatsa, Richa Singh, A. Ross, A. Noore","doi":"10.1109/IJCB.2011.6117518","DOIUrl":"https://doi.org/10.1109/IJCB.2011.6117518","url":null,"abstract":"Multibiometric systems fuse the evidence (e.g., match scores) pertaining to multiple biometric modalities or classifiers. Most score-level fusion schemes discussed in the literature require the processing (i.e., feature extraction and matching) of every modality prior to invoking the fusion scheme. This paper presents a framework for dynamic classifier selection and fusion based on the quality of the gallery and probe images associated with each modality with multiple classifiers. The quality assessment algorithm for each biometric modality computes a quality vector for the gallery and probe images that is used for classifier selection. These vectors are used to train Support Vector Machines (SVMs) for decision making. In the proposed framework, the biometric modalities are arranged sequentially such that the stronger biometric modality has higher priority for being processed. Since fusion is required only when all unimodal classifiers are rejected by the SVM classifiers, the average computational time of the proposed framework is significantly reduced. 
Experimental results on different multi-modal databases involving face and fingerprint show that the proposed quality-based classifier selection framework yields good performance even when the quality of the biometric sample is sub-optimal.","PeriodicalId":103913,"journal":{"name":"2011 International Joint Conference on Biometrics (IJCB)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130034960","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2011-10-11 | DOI: 10.1109/IJCB.2011.6117493
H. Rara, A. Farag, Todd Davis
This paper proposes a model-based approach for 3D facial shape recovery using a small set of feature points from an input image of unknown pose and illumination. Previous model-based approaches usually require both texture (shading) and shape information from the input image in order to perform 3D facial shape recovery. The methods discussed here, however, need only the 2D feature points from a single input image to reconstruct the 3D shape. Experimental results show acceptable reconstructed shapes when compared to the ground truth and previous approaches. This work has potential value in applications such as face recognition at-a-distance (FRAD), where classical shape-from-X (e.g., stereo, motion, and shading) algorithms are not feasible due to input image quality.
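A minimal sketch of the model-based recovery idea, under the assumption of a linear (PCA-style) shape model and a known orthographic projection for the estimated pose; `fit_shape` and its Tikhonov regulariser `lam` are hypothetical names for illustration, not the paper's method:

```python
import numpy as np

def fit_shape(mean_shape, basis, pts2d, P, lam=0.1):
    """Least-squares fit of shape coefficients a such that
    P @ (mean_shape + basis @ a) approximates the observed 2D points.

    mean_shape: (3n,) mean 3D landmark positions
    basis:      (3n, k) linear shape basis (assumed PCA-style model)
    pts2d:      (2n,) observed 2D feature points, stacked per landmark
    P:          (2n, 3n) orthographic projection for the estimated pose
    lam:        regulariser keeping the coefficients plausible
    """
    A = P @ basis                        # how each coefficient moves the 2D points
    b = pts2d - P @ mean_shape           # residual after projecting the mean shape
    k = A.shape[1]
    a = np.linalg.solve(A.T @ A + lam * np.eye(k), A.T @ b)
    return mean_shape + basis @ a        # reconstructed 3D landmark positions
```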
{"title":"Model-based 3D shape recovery from single images of unknown pose and illumination using a small number of feature points","authors":"H. Rara, A. Farag, Todd Davis","doi":"10.1109/IJCB.2011.6117493","DOIUrl":"https://doi.org/10.1109/IJCB.2011.6117493","url":null,"abstract":"This paper proposes a model-based approach for 3D facial shape recovery using a small set of feature points from an input image of unknown pose and illumination. Previous model-based approaches usually require both texture (shading) and shape information from the input image in order to perform 3D facial shape recovery. However, the methods discussed here need only the 2D feature points from a single input image to reconstruct the 3D shape. Experimental results show acceptable reconstructed shapes when compared to the ground truth and previous approaches. This work has potential value in applications such face recognition at-a-distance (FRAD), where the classical shape-from-X (e.g., stereo, motion and shading) algorithms are not feasible due to input image quality.","PeriodicalId":103913,"journal":{"name":"2011 International Joint Conference on Biometrics (IJCB)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130479955","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2011-10-11 | DOI: 10.1109/IJCB.2011.6117480
John C. Stewart, John V. Monaco, Sung-Hyuk Cha, C. Tappert
The 2008 federal Higher Education Opportunity Act requires institutions of higher learning to make greater access-control efforts, by adopting identification technologies as they become more ubiquitous, to ensure that the students of record are those actually accessing the systems and taking exams in online courses. To meet these needs, keystroke and stylometry biometrics were investigated toward developing a robust system to authenticate (verify) online test takers. Performance statistics on keystroke, stylometry, and combined keystroke-stylometry systems were obtained on data from 40 test-taking students enrolled in a university course. The best equal-error-rate performance of the keystroke system was 0.5%, an improvement over earlier reported results on this system. The performance of the stylometry system, however, was rather poor and did not boost the performance of the keystroke system, indicating that stylometry is not suitable for the text lengths of short-answer tests, at least for the method employed, unless the features can be substantially improved.
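The equal error rate quoted above is the operating point where the false accept and false reject rates coincide. A generic way to compute it from genuine and impostor score sets (not the authors' evaluation code) is:

```python
def equal_error_rate(genuine, impostor):
    """Sweep thresholds over all observed scores and return the EER,
    taken at the threshold where FAR and FRR are closest.
    Higher scores are assumed to mean a better match.
    """
    best_gap, eer = float("inf"), 1.0
    for t in sorted(set(genuine) | set(impostor)):
        far = sum(s >= t for s in impostor) / len(impostor)  # false accepts
        frr = sum(s < t for s in genuine) / len(genuine)     # false rejects
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return eer
```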
{"title":"An investigation of keystroke and stylometry traits for authenticating online test takers","authors":"John C. Stewart, John V. Monaco, Sung-Hyuk Cha, C. Tappert","doi":"10.1109/IJCB.2011.6117480","DOIUrl":"https://doi.org/10.1109/IJCB.2011.6117480","url":null,"abstract":"The 2008 federal Higher Education Opportunity Act requires institutions of higher learning to make greater access control efforts for the purposes of assuring that students of record are those actually accessing the systems and taking exams in online courses by adopting identification technologies as they become more ubiquitous. To meet these needs, keystroke and stylometry biometrics were investigated towards developing a robust system to authenticate (verify) online test takers. Performance statistics on keystroke, stylometry, and combined keystroke-stylometry systems were obtained on data from 40 test-taking students enrolled in a university course. The best equal-error-rate performance on the keystroke system was 0.5% which is an improvement over earlier reported results on this system. The performance of the stylometry system, however, was rather poor and did not boost the performance of the keystroke system, indicating that stylometry is not suitable for text lengths of short-answer tests unless the features can be substantially improved, at least for the method employed.","PeriodicalId":103913,"journal":{"name":"2011 International Joint Conference on Biometrics (IJCB)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127948395","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2011-10-11 | DOI: 10.1109/IJCB.2011.6117582
G. Ariyanto
There have as yet been few gait biometrics approaches that use temporal 3D data. Clearly, 3D gait data conveys more information than 2D data, and it is also the natural representation of human gait as perceived by humans. In this paper we explore the potential of model-based methods on a 3D volumetric (voxel) gait dataset. We use a structural model of the human lower legs consisting of articulated cylinders with 3D degrees of freedom (DoF) at each joint. We develop a simple yet effective model-fitting algorithm using this gait model, a correlation filter, and a dynamic programming approach. Human gait kinematics trajectories are then extracted by fitting the gait model to the gait data. At each frame we generate a correlation energy map between the gait model and the data. Dynamic programming is used to extract the gait kinematics trajectories by selecting the most likely path through the whole sequence. We are able to extract both gait structural and dynamics features, some of which are inherently unique to 3D data. Analysis on a database of 46 subjects, each with 4 sample sequences, shows an encouraging correct classification rate and suggests that 3D features can contribute even more.
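The dynamic-programming step, selecting the most likely trajectory through per-frame correlation energy maps, can be sketched as a Viterbi-style recursion. The flattened pose-state index and the linear smoothness penalty are illustrative assumptions, not the paper's exact cost model:

```python
def best_trajectory(energy, smooth=1.0):
    """energy[t][s]: correlation energy of pose state s at frame t (higher
    is better). Returns the state sequence maximising total energy minus a
    smoothness penalty on jumps between consecutive frames."""
    T, S = len(energy), len(energy[0])
    score = [list(energy[0])]
    back = []
    for t in range(1, T):
        row, ptr = [], []
        for s in range(S):
            # best predecessor, penalising large changes in joint state
            prev = max(range(S), key=lambda p: score[-1][p] - smooth * abs(p - s))
            row.append(energy[t][s] + score[-1][prev] - smooth * abs(prev - s))
            ptr.append(prev)
        score.append(row)
        back.append(ptr)
    # backtrack from the best final state
    s = max(range(S), key=lambda i: score[-1][i])
    path = [s]
    for ptr in reversed(back):
        s = ptr[s]
        path.append(s)
    return path[::-1]
```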
{"title":"Model-based 3D gait biometrics","authors":"G. Ariyanto","doi":"10.1109/IJCB.2011.6117582","DOIUrl":"https://doi.org/10.1109/IJCB.2011.6117582","url":null,"abstract":"There have as yet been few gait biometrics approaches which use temporal 3D data. Clearly, 3D gait data conveys more information than 2D data and it is also the natural representation of human gait perceived by human. In this paper we explore the potential of using model-based methods in a 3D volumetric (voxel) gait dataset. We use a structural model including articulated cylinders with 3D Degrees of Freedom (DoF) at each joint to model the human lower legs. We develop a simple yet effective model-fitting algorithm using this gait model, correlation filter and a dynamic programming approach. Human gait kinematics trajectories are then extracted by fitting the gait model into the gait data. At each frame we generate a correlation energy map between the gait model and the data. Dynamic programming is used to extract the gait kinematics trajectories by selecting the most likely path in the whole sequence. We are successfully able to extract both gait structural and dynamics features. Some of the features extracted here are inherently unique to 3D data. 
Analysis on a database of 46 subjects each with 4 sample sequences, shows an encouraging correct classification rate and suggests that 3D features can contribute even more.","PeriodicalId":103913,"journal":{"name":"2011 International Joint Conference on Biometrics (IJCB)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126488181","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2011-10-11 | DOI: 10.1109/IJCB.2011.6117504
Sabesan Sivapalan, Daniel Chen, S. Denman, S. Sridharan, C. Fookes
Gait energy images (GEIs) and their variants form the basis of many recent appearance-based gait recognition systems. The GEI combines good recognition performance with a simple implementation, though it suffers from problems inherent to appearance-based approaches, such as being highly view dependent. In this paper, we extend the concept of the GEI to 3D to create what we call the gait energy volume, or GEV. A basic GEV implementation is tested on the CMU MoBo database, showing improvements over both the GEI baseline and a fused multi-view GEI approach. We also demonstrate the efficacy of this approach on partial volume reconstructions created from frontal depth images, which can be acquired more practically, for example, in biometric portals implemented with stereo cameras or other depth acquisition systems. Experiments on frontal depth images are evaluated on an in-house database captured using the Microsoft Kinect, and demonstrate the validity of the proposed approach.
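By analogy with the 2D GEI, a gait energy volume is the per-voxel average of binary silhouette volumes over a gait cycle. This sketch assumes the volumes are already reconstructed and aligned, which is where the real work lies:

```python
import numpy as np

def gait_energy_volume(volumes):
    """volumes: iterable of aligned binary voxel arrays (one per frame) for
    a single gait cycle. The GEV is their per-voxel mean, so each voxel
    holds the fraction of the cycle for which it was occupied."""
    vols = np.stack([np.asarray(v, dtype=float) for v in volumes])
    return vols.mean(axis=0)
```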
{"title":"Gait energy volumes and frontal gait recognition using depth images","authors":"Sabesan Sivapalan, Daniel Chen, S. Denman, S. Sridharan, C. Fookes","doi":"10.1109/IJCB.2011.6117504","DOIUrl":"https://doi.org/10.1109/IJCB.2011.6117504","url":null,"abstract":"Gait energy images (GEIs) and its variants form the basis of many recent appearance-based gait recognition systems. The GEI combines good recognition performance with a simple implementation, though it suffers problems inherent to appearance-based approaches, such as being highly view dependent. In this paper, we extend the concept of the GEI to 3D, to create what we call the gait energy volume, or GEV. A basic GEV implementation is tested on the CMU MoBo database, showing improvements over both the GEI baseline and a fused multi-view GEI approach. We also demonstrate the efficacy of this approach on partial volume reconstructions created from frontal depth images, which can be more practically acquired, for example, in biometric portals implemented with stereo cameras, or other depth acquisition systems. Experiments on frontal depth images are evaluated on an in-house developed database captured using the Microsoft Kinect, and demonstrate the validity of the proposed approach.","PeriodicalId":103913,"journal":{"name":"2011 International Joint Conference on Biometrics (IJCB)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125634425","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2011-10-11 | DOI: 10.1109/IJCB.2011.6117491
V. Vijayan, K. Bowyer, P. Flynn, Di Huang, Liming Chen, M. Hansen, Omar Ocegueda, S. Shah, I. Kakadiaris
Existing 3D face recognition algorithms have achieved such high performance on public datasets like FRGC v2 that it is difficult to achieve further significant increases in recognition performance. The 3D TEC dataset, however, is more challenging: it consists of 3D scans of 107 pairs of twins acquired in a single session, with each subject having one scan of a neutral expression and one of a smiling expression. The combination of the facial similarity of identical twins and the variation in facial expression makes this a challenging dataset. We conduct experiments using state-of-the-art face recognition algorithms and present the results. Our results indicate that 3D face recognition of identical twins in the presence of varying facial expressions is far from a solved problem, but that good performance is possible.
{"title":"Twins 3D face recognition challenge","authors":"V. Vijayan, K. Bowyer, P. Flynn, Di Huang, Liming Chen, M. Hansen, Omar Ocegueda, S. Shah, I. Kakadiaris","doi":"10.1109/IJCB.2011.6117491","DOIUrl":"https://doi.org/10.1109/IJCB.2011.6117491","url":null,"abstract":"Existing 3D face recognition algorithms have achieved high enough performances against public datasets like FRGC v2, that it is difficult to achieve further significant increases in recognition performance. However, the 3D TEC dataset is a more challenging dataset which consists of 3D scans of 107 pairs of twins that were acquired in a single session, with each subject having a scan of a neutral expression and a smiling expression. The combination of factors related to the facial similarity of identical twins and the variation in facial expression makes this a challenging dataset. We conduct experiments using state of the art face recognition algorithms and present the results. Our results indicate that 3D face recognition of identical twins in the presence of varying facial expressions is far from a solved problem, but that good performance is possible.","PeriodicalId":103913,"journal":{"name":"2011 International Joint Conference on Biometrics (IJCB)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128044706","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2011-10-11 | DOI: 10.1109/IJCB.2011.6117499
Dong Yi, Zhen Lei, S. Li
Eye localization is an important part of a face recognition system, because its precision closely affects the performance of face recognition. Although various methods have achieved high precision on face images of high quality, their precision drops on low quality images. In this paper, we propose a robust eye localization method for low quality face images that improves the eye detection rate and localization precision. First, we propose a probabilistic cascade (P-Cascade) framework, in which we reformulate the traditional cascade classifier in a probabilistic way. The P-Cascade gives every image patch a chance to contribute to the final result, regardless of whether the patch is accepted or rejected by the cascade. Second, we propose two extensions to further improve robustness and precision within the P-Cascade framework: (1) extending the feature set, and (2) stacking two classifiers at multiple scales. Extensive experiments on the JAFFE, BioID, and LFW databases and a self-collected video surveillance database show that our method is comparable to state-of-the-art methods on high quality images and can work well on low quality images. This work supplies a solid base for face recognition applications in unconstrained or surveillance environments.
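One plausible reading of the probabilistic-cascade idea, letting every patch contribute soft evidence rather than being discarded at its first failed stage, is sketched below. The per-stage probabilities and the log-likelihood combination are assumptions for illustration, not the paper's exact formulation:

```python
import math

def cascade_log_likelihood(stage_probs):
    """stage_probs: per-stage probabilities that a patch contains an eye.
    A hard cascade rejects at the first stage below threshold; here every
    stage contributes, so near-miss patches still inform localisation."""
    return sum(math.log(max(p, 1e-12)) for p in stage_probs)

def localise(patches):
    """patches: mapping from patch position to its stage probabilities.
    Returns the position with the highest accumulated evidence."""
    return max(patches, key=lambda pos: cascade_log_likelihood(patches[pos]))
```

Note that a hard cascade with threshold 0.5 would reject the (0, 0) patch in the test below outright at its second stage; the soft combination instead weighs all stages.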
{"title":"A robust eye localization method for low quality face images","authors":"Dong Yi, Zhen Lei, S. Li","doi":"10.1109/IJCB.2011.6117499","DOIUrl":"https://doi.org/10.1109/IJCB.2011.6117499","url":null,"abstract":"Eye localization is an important part in face recognition system, because its precision closely affects the performance of face recognition. Although various methods have already achieved high precision on the face images with high quality, their precision will drop on low quality images. In this paper, we propose a robust eye localization method for low quality face images to improve the eye detection rate and localization precision. First, we propose a probabilistic cascade (P-Cascade) framework, in which we reformulate the traditional cascade classifier in a probabilistic way. The P-Cascade can give chance to each image patch contributing to the final result, regardless the patch is accepted or rejected by the cascade. Second, we propose two extensions to further improve the robustness and precision in the P-Cascade framework. There are: (1) extending feature set, and (2) stacking two classifiers in multiple scales. Extensive experiments on JAFFE, BioID, LFW and a self-collected video surveillance database show that our method is comparable to state-of-the-art methods on high quality images and can work well on low quality images. 
This work supplies a solid base for face recognition applications under unconstrained or surveillance environments.","PeriodicalId":103913,"journal":{"name":"2011 International Joint Conference on Biometrics (IJCB)","volume":"23 5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133173702","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2011-10-11 | DOI: 10.1109/IJCB.2011.6117519
H. Bhatt, Samarth Bharadwaj, Richa Singh, Mayank Vatsa, A. Noore, A. Ross
In an operational biometric verification system, changes in biometric data over a period of time can affect the classification accuracy. Online learning has been used to update the classifier decision boundary, but it requires labeled data, which is only available during new enrolments. This paper presents a biometric classifier update algorithm in which the classifier decision boundary is updated using both labeled enrolment instances and unlabeled probe instances. The proposed co-training online classifier update algorithm is cast as a semi-supervised learning task and applied to a face verification application. Experiments indicate that the proposed algorithm improves performance in terms of both classification accuracy and computational time.
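The co-training loop, in which each classifier pseudo-labels the unlabeled probes it is confident about in order to update the other, can be sketched generically. The `predict_proba`/`update` interface and the 0.9 confidence threshold are hypothetical, not the paper's API:

```python
def co_train_step(clf_a, clf_b, unlabeled, threshold=0.9):
    """clf_a, clf_b: objects exposing predict_proba(x) -> P(genuine) and
    update(x, label). Each classifier's confident predictions become
    pseudo-labels for the other, so unlabeled probe instances refine both
    decision boundaries online, without waiting for new enrolments."""
    for x in unlabeled:
        pa, pb = clf_a.predict_proba(x), clf_b.predict_proba(x)
        if pa >= threshold or pa <= 1 - threshold:   # clf_a is confident
            clf_b.update(x, pa >= threshold)
        if pb >= threshold or pb <= 1 - threshold:   # clf_b is confident
            clf_a.update(x, pb >= threshold)
```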
{"title":"On co-training online biometric classifiers","authors":"H. Bhatt, Samarth Bharadwaj, Richa Singh, Mayank Vatsa, A. Noore, A. Ross","doi":"10.1109/IJCB.2011.6117519","DOIUrl":"https://doi.org/10.1109/IJCB.2011.6117519","url":null,"abstract":"In an operational biometric verification system, changes in biometric data over a period of time can affect the classification accuracy. Online learning has been used for updating the classifier decision boundary. However, this requires labeled data that is only available during new enrolments. This paper presents a biometric classifier update algorithm in which the classifier decision boundary is updated using both labeled enrolment instances and unlabeled probe instances. The proposed co-training online classifier update algorithm is presented as a semi-supervised learning task and is applied to a face verification application. Experiments indicate that the proposed algorithm improves the performance both in terms of classification accuracy and computational time.","PeriodicalId":103913,"journal":{"name":"2011 International Joint Conference on Biometrics (IJCB)","volume":"376 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134438539","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2011-10-11 | DOI: 10.1109/IJCB.2011.6117588
B. Oh, K. Toh
This work proposes a structured random projection via feature weighting for cancelable identity verification. Essentially, projected facial features are weighted based on their discrimination capability prior to matching. To conceal the face identity, an average over several templates with different transformations is computed. Finally, several cancelable templates extracted from partial face images are fused at score level via total error rate minimization. Empirical experiments in two scenarios on the AR, FERET, and Sheffield databases show that the proposed method consistently outperforms competing state-of-the-art unsupervised methods in terms of verification accuracy.
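The cancelable-template mechanics, a user-specific random projection with averaging over several differently transformed templates, can be sketched as follows. The dimensions, the seed-per-transformation scheme, and the reduction of the paper's discrimination weighting to a simple per-feature scale are all illustrative assumptions:

```python
import numpy as np

def cancelable_template(features, weights, seeds, dim=16):
    """features: (d,) facial feature vector; weights: (d,) per-feature
    discrimination weights applied before projection; seeds: per-user
    seeds, one per random transformation. Averaging the differently
    transformed templates conceals the underlying features, and issuing
    new seeds revokes a compromised template."""
    x = np.asarray(features) * np.asarray(weights)
    d = x.shape[0]
    templates = []
    for seed in seeds:
        rng = np.random.default_rng(seed)  # user-specific, revocable matrix
        P = rng.standard_normal((dim, d)) / np.sqrt(dim)
        templates.append(P @ x)
    return np.mean(templates, axis=0)
```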
{"title":"Fusion of structured projections for cancelable face identity verification","authors":"B. Oh, K. Toh","doi":"10.1109/IJCB.2011.6117588","DOIUrl":"https://doi.org/10.1109/IJCB.2011.6117588","url":null,"abstract":"This work proposes a structured random projection via feature weighting for cancelable identity verification. Essentially, projected facial features are weighted based on their discrimination capability prior to a matching process. In order to conceal the face identity, an averaging over several templates with different transformations is performed. Finally, several cancelable templates extracted from partial face images are fused at score level via a total error rate minimization. Our empirical experiments on two experimental scenarios using AR, FERET and Sheffield databases show that the proposed method consistently outperforms competing state-of-the-art un-supervised methods in terms of verification accuracy.","PeriodicalId":103913,"journal":{"name":"2011 International Joint Conference on Biometrics (IJCB)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134387417","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2011-10-11 | DOI: 10.1109/IJCB.2011.6117535
C. Rathgeb, A. Uhl, Peter Wild
Fuzzy commitment schemes have been established as a reliable means of binding cryptographic keys to binary feature vectors extracted from diverse biometric modalities. In addition, attempts have been made to extend fuzzy commitment schemes to incorporate multiple biometric feature vectors. Within these schemes, however, potential improvements through feature-level fusion are commonly neglected. In this paper, a feature-level fusion technique for fuzzy commitment schemes is presented. The proposed reliability-balanced feature-level fusion re-arranges and combines two binary biometric templates so that error correction capacities are exploited more effectively within a fuzzy commitment scheme, yielding improved key-retrieval rates. In experiments carried out on iris-biometric data, reliability-balanced feature-level fusion significantly outperforms conventional approaches to multi-biometric fuzzy commitment schemes, confirming the soundness of the proposed technique.
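A fuzzy commitment binds an error-correcting codeword of the key to a binary template by XOR, so a sufficiently close probe template lets the code recover the key. The toy 3x repetition code below stands in for a real code; the paper's contribution is re-arranging the fused template bits so that reliable bits line up with the code's correction capacity, which this sketch does not model:

```python
def encode(key_bits):
    """3x repetition encoding of the key (toy error-correcting code)."""
    return [b for b in key_bits for _ in range(3)]

def commit(codeword, template):
    """Bind the codeword (list of 0/1) to a same-length binary template."""
    return [c ^ t for c, t in zip(codeword, template)]

def decommit(commitment, template):
    """XOR with a probe template, then decode. Majority voting over each
    3-bit group corrects up to one flipped template bit per group."""
    noisy = [c ^ t for c, t in zip(commitment, template)]
    return [1 if sum(noisy[i:i + 3]) >= 2 else 0
            for i in range(0, len(noisy), 3)]
```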
{"title":"Reliability-balanced feature level fusion for fuzzy commitment scheme","authors":"C. Rathgeb, A. Uhl, Peter Wild","doi":"10.1109/IJCB.2011.6117535","DOIUrl":"https://doi.org/10.1109/IJCB.2011.6117535","url":null,"abstract":"Fuzzy commitment schemes have been established as a reliable means of binding cryptographic keys to binary feature vectors extracted from diverse biometric modalities. In addition, attempts have been made to extend fuzzy commitment schemes to incorporate multiple biometric feature vectors. Within these schemes potential improvements through feature level fusion are commonly neglected. In this paper a feature level fusion technique for fuzzy commitment schemes is presented. The proposed reliability-balanced feature level fusion is designed to re-arrange and combine two binary biometric templates in a way that error correction capacities are exploited more effectively within a fuzzy commitment scheme yielding improvement with respect to key-retrieval rates. In experiments, which are carried out on iris-biometric data, reliability-balanced feature level fusion significantly outperforms conventional approaches to multi-biometric fuzzy commitment schemes confirming the soundness of the proposed technique.","PeriodicalId":103913,"journal":{"name":"2011 International Joint Conference on Biometrics (IJCB)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129141853","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}