
Latest publications from the 2011 International Joint Conference on Biometrics (IJCB)

A framework for quality-based biometric classifier selection
Pub Date : 2011-10-11 DOI: 10.1109/IJCB.2011.6117518
H. Bhatt, Samarth Bharadwaj, Mayank Vatsa, Richa Singh, A. Ross, A. Noore
Multibiometric systems fuse the evidence (e.g., match scores) pertaining to multiple biometric modalities or classifiers. Most score-level fusion schemes discussed in the literature require the processing (i.e., feature extraction and matching) of every modality prior to invoking the fusion scheme. This paper presents a framework for dynamic classifier selection and fusion based on the quality of the gallery and probe images associated with each modality, where each modality may have multiple classifiers. The quality assessment algorithm for each biometric modality computes a quality vector for the gallery and probe images that is used for classifier selection. These vectors are used to train Support Vector Machines (SVMs) for decision making. In the proposed framework, the biometric modalities are arranged sequentially such that the stronger biometric modality has higher priority for being processed. Since fusion is required only when all unimodal classifiers are rejected by the SVM classifiers, the average computational time of the proposed framework is significantly reduced. Experimental results on different multi-modal databases involving face and fingerprint show that the proposed quality-based classifier selection framework yields good performance even when the quality of the biometric sample is sub-optimal.
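The sequential, quality-gated selection idea can be sketched as follows. This is a toy illustration, not the paper's implementation: `accept_unimodal` stands in for the trained SVM decision, and all qualities, scores, and thresholds are invented.

```python
# Hypothetical sketch: modalities are tried strongest-first; a quality
# gate (stand-in for the trained SVM) decides whether the unimodal
# decision is trusted, and score-level fusion is only the fallback.

def accept_unimodal(quality, threshold=0.7):
    """Stand-in for the SVM: trust the unimodal classifier only when
    the gallery/probe quality is high enough."""
    return quality >= threshold

def verify(samples, fuse):
    """samples: list of (quality, match_score), ordered strongest-first.
    Returns a match score, preferring early unimodal decisions."""
    for quality, score in samples:
        if accept_unimodal(quality):
            return score            # stop early: weaker modalities never processed
    # every unimodal decision was rejected -> fuse all available scores
    return fuse([s for _, s in samples])

mean_fusion = lambda scores: sum(scores) / len(scores)

# Strong face sample: decided by the first classifier alone.
print(verify([(0.9, 0.82), (0.4, 0.55)], mean_fusion))            # 0.82
# Both samples poor: falls back to score-level fusion.
print(round(verify([(0.5, 0.60), (0.4, 0.70)], mean_fusion), 2))  # 0.65
```

The early return is what saves the average computational time the abstract mentions: fusion (and the processing it requires) only happens when every quality gate rejects.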
Citations: 24
Model-based 3D shape recovery from single images of unknown pose and illumination using a small number of feature points
Pub Date : 2011-10-11 DOI: 10.1109/IJCB.2011.6117493
H. Rara, A. Farag, Todd Davis
This paper proposes a model-based approach for 3D facial shape recovery using a small set of feature points from an input image of unknown pose and illumination. Previous model-based approaches usually require both texture (shading) and shape information from the input image in order to perform 3D facial shape recovery. However, the methods discussed here need only the 2D feature points from a single input image to reconstruct the 3D shape. Experimental results show acceptable reconstructed shapes when compared to the ground truth and previous approaches. This work has potential value in applications such as face recognition at-a-distance (FRAD), where the classical shape-from-X (e.g., stereo, motion and shading) algorithms are not feasible due to input image quality.
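The core of such model-based recovery is fitting a statistical shape model to the observed 2D landmarks. A minimal sketch under simplifying assumptions (a linear PCA-style shape model, a known orthographic projection, synthetic stand-in matrices rather than the paper's trained model):

```python
import numpy as np

# Toy shape-from-landmarks: solve for the linear shape-model
# coefficients that best explain the observed 2D feature points.

rng = np.random.default_rng(0)
n_pts, n_modes = 6, 2
mean_shape = rng.normal(size=3 * n_pts)          # mean 3D shape (x,y,z per point)
basis = rng.normal(size=(3 * n_pts, n_modes))    # shape basis (deformation modes)

# Orthographic projection: keep the x and y coordinate of every point.
P = np.zeros((2 * n_pts, 3 * n_pts))
for i in range(n_pts):
    P[2 * i, 3 * i] = 1.0          # x
    P[2 * i + 1, 3 * i + 1] = 1.0  # y

true_coeffs = np.array([0.5, -1.2])
landmarks2d = P @ (mean_shape + basis @ true_coeffs)   # observed 2D feature points

# Recover the shape coefficients by linear least squares:
A = P @ basis
coeffs, *_ = np.linalg.lstsq(A, landmarks2d - P @ mean_shape, rcond=None)
shape3d = mean_shape + basis @ coeffs                  # recovered full 3D shape

print(np.allclose(coeffs, true_coeffs))  # True (noise-free toy data)
```

With unknown pose, a real pipeline would alternate between estimating the projection and the shape coefficients; the noise-free least-squares step above is only the inner loop of that idea.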
Citations: 17
An investigation of keystroke and stylometry traits for authenticating online test takers
Pub Date : 2011-10-11 DOI: 10.1109/IJCB.2011.6117480
John C. Stewart, John V. Monaco, Sung-Hyuk Cha, C. Tappert
The 2008 federal Higher Education Opportunity Act requires institutions of higher learning to make greater access control efforts to assure that the students of record are those actually accessing the systems and taking exams in online courses, by adopting identification technologies as they become more ubiquitous. To meet these needs, keystroke and stylometry biometrics were investigated towards developing a robust system to authenticate (verify) online test takers. Performance statistics on keystroke, stylometry, and combined keystroke-stylometry systems were obtained on data from 40 test-taking students enrolled in a university course. The best equal-error-rate performance on the keystroke system was 0.5%, which is an improvement over earlier reported results on this system. The performance of the stylometry system, however, was rather poor and did not boost the performance of the keystroke system, indicating that stylometry is not suitable for the text lengths of short-answer tests unless the features can be substantially improved, at least for the method employed.
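The 0.5% figure is an equal error rate (EER): the operating point where the false accept rate equals the false reject rate. A minimal sketch of how an EER is read off genuine/impostor score sets (the score lists are illustrative toy data):

```python
# EER sketch: sweep candidate thresholds over all observed scores and
# take the point where FAR and FRR are closest.

def rates(threshold, genuine, impostor):
    far = sum(s >= threshold for s in impostor) / len(impostor)  # false accepts
    frr = sum(s < threshold for s in genuine) / len(genuine)     # false rejects
    return far, frr

def equal_error_rate(genuine, impostor):
    best = min((abs(far - frr), (far + frr) / 2)
               for t in sorted(genuine + impostor)
               for far, frr in [rates(t, genuine, impostor)])
    return best[1]   # FAR/FRR average at the closest crossing

genuine = [0.9, 0.8, 0.85, 0.7, 0.95]   # same-user comparison scores
impostor = [0.3, 0.4, 0.2, 0.75, 0.1]   # different-user comparison scores
print(equal_error_rate(genuine, impostor))  # 0.2
```

On real data the sweep is done over a fine threshold grid (or the ROC curve is interpolated), but the crossing-point definition is the same.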
Citations: 65
Model-based 3D gait biometrics
Pub Date : 2011-10-11 DOI: 10.1109/IJCB.2011.6117582
G. Ariyanto
Few gait biometrics approaches to date use temporal 3D data. Clearly, 3D gait data conveys more information than 2D data, and it is also the natural representation of human gait as perceived by humans. In this paper we explore the potential of using model-based methods on a 3D volumetric (voxel) gait dataset. We use a structural model of the human lower legs, comprising articulated cylinders with 3D degrees of freedom (DoF) at each joint. We develop a simple yet effective model-fitting algorithm using this gait model, a correlation filter, and a dynamic programming approach. Human gait kinematics trajectories are then extracted by fitting the gait model to the gait data. At each frame we generate a correlation energy map between the gait model and the data. Dynamic programming is used to extract the gait kinematics trajectories by selecting the most likely path through the whole sequence. We successfully extract both gait structural and dynamic features; some of the features extracted here are inherently unique to 3D data. Analysis on a database of 46 subjects, each with 4 sample sequences, shows an encouraging correct classification rate and suggests that 3D features can contribute even more.
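The path-selection step can be sketched as a small Viterbi-style dynamic program: each frame scores candidate joint configurations by correlation energy, and the DP picks the highest-scoring trajectory subject to a smoothness penalty. The energies and penalty below are illustrative, not the paper's values.

```python
# DP over per-frame correlation energies: maximize total energy minus
# a penalty for jumps between consecutive candidate poses.

def best_trajectory(energy, penalty=1.0):
    """energy[t][s]: correlation energy of candidate pose s at frame t.
    Returns the pose index sequence with the highest total score."""
    n_frames, n_states = len(energy), len(energy[0])
    score = list(energy[0])      # best score ending in each state
    back = []                    # backpointers per frame
    for t in range(1, n_frames):
        prev, score = score, []
        back.append([])
        for s in range(n_states):
            cand = [prev[p] - penalty * abs(s - p) for p in range(n_states)]
            p_best = max(range(n_states), key=cand.__getitem__)
            score.append(energy[t][s] + cand[p_best])
            back[-1].append(p_best)
    # trace back the most likely path
    path = [max(range(n_states), key=score.__getitem__)]
    for b in reversed(back):
        path.append(b[path[-1]])
    return path[::-1]

energy = [[1, 5, 0], [0, 6, 1], [0, 2, 7]]
print(best_trajectory(energy))  # [1, 1, 2]
```

The smoothness penalty is what keeps the extracted joint-angle trajectory from jumping between spurious correlation peaks frame to frame.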
Citations: 116
Gait energy volumes and frontal gait recognition using depth images
Pub Date : 2011-10-11 DOI: 10.1109/IJCB.2011.6117504
Sabesan Sivapalan, Daniel Chen, S. Denman, S. Sridharan, C. Fookes
Gait energy images (GEIs) and their variants form the basis of many recent appearance-based gait recognition systems. The GEI combines good recognition performance with a simple implementation, though it suffers from problems inherent to appearance-based approaches, such as being highly view dependent. In this paper, we extend the concept of the GEI to 3D to create what we call the gait energy volume, or GEV. A basic GEV implementation is tested on the CMU MoBo database, showing improvements over both the GEI baseline and a fused multi-view GEI approach. We also demonstrate the efficacy of this approach on partial volume reconstructions created from frontal depth images, which can be acquired more practically, for example, in biometric portals implemented with stereo cameras or other depth acquisition systems. Experiments on frontal depth images are evaluated on an in-house database captured using the Microsoft Kinect, and demonstrate the validity of the proposed approach.
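The GEI is simply the pixel-wise temporal average of aligned binary silhouettes over a gait cycle, and the GEV applies the same average to voxel volumes. A tiny sketch with toy 4x4 silhouettes (the same call works on 3D voxel stacks):

```python
import numpy as np

def gait_energy(frames):
    """frames: (T, ...) stack of aligned binary silhouettes/volumes.
    Returns their temporal average - the GEI (2D) or GEV (3D)."""
    return np.mean(np.asarray(frames, dtype=float), axis=0)

silhouettes = [
    [[0, 1, 1, 0], [0, 1, 1, 0], [0, 1, 0, 0], [0, 1, 0, 0]],
    [[0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 1, 0], [0, 0, 1, 0]],
]
gei = gait_energy(silhouettes)
print(gei[2])   # [0.  0.5 0.5 0. ] - the moving leg averages to 0.5

voxels = np.random.rand(8, 4, 4, 4) > 0.5   # toy (T, x, y, z) volumes
print(gait_energy(voxels).shape)            # (4, 4, 4) - a GEV
```

Static body parts stay near 1 in the average while swinging limbs blur toward intermediate values, which is what encodes the gait dynamics.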
Citations: 135
Twins 3D face recognition challenge
Pub Date : 2011-10-11 DOI: 10.1109/IJCB.2011.6117491
V. Vijayan, K. Bowyer, P. Flynn, Di Huang, Liming Chen, M. Hansen, Omar Ocegueda, S. Shah, I. Kakadiaris
Existing 3D face recognition algorithms have achieved sufficiently high performance on public datasets such as FRGC v2 that further significant increases in recognition performance are difficult to achieve. However, the 3D TEC dataset is more challenging: it consists of 3D scans of 107 pairs of twins acquired in a single session, with each subject scanned with a neutral expression and a smiling expression. The combination of the facial similarity of identical twins and the variation in facial expression makes this a challenging dataset. We conduct experiments using state-of-the-art face recognition algorithms and present the results. Our results indicate that 3D face recognition of identical twins in the presence of varying facial expressions is far from a solved problem, but that good performance is possible.
Citations: 67
A robust eye localization method for low quality face images
Pub Date : 2011-10-11 DOI: 10.1109/IJCB.2011.6117499
Dong Yi, Zhen Lei, S. Li
Eye localization is an important part of a face recognition system, because its precision closely affects the performance of face recognition. Although various methods have achieved high precision on face images of high quality, their precision drops on low quality images. In this paper, we propose a robust eye localization method for low quality face images to improve the eye detection rate and localization precision. First, we propose a probabilistic cascade (P-Cascade) framework, in which we reformulate the traditional cascade classifier in a probabilistic way. The P-Cascade gives each image patch a chance to contribute to the final result, regardless of whether the patch is accepted or rejected by the cascade. Second, we propose two extensions to further improve the robustness and precision of the P-Cascade framework: (1) extending the feature set, and (2) stacking two classifiers at multiple scales. Extensive experiments on JAFFE, BioID, LFW and a self-collected video surveillance database show that our method is comparable to state-of-the-art methods on high quality images and can work well on low quality images. This work supplies a solid base for face recognition applications under unconstrained or surveillance environments.
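The contrast between a hard cascade and a probabilistic reformulation can be sketched as follows. This is an illustrative stand-in, not the paper's P-Cascade: the idea shown is only that a hard cascade discards a patch at its first failing stage, while a soft per-stage probability lets every patch keep a nonzero contribution. Stage scores and thresholds are toy numbers.

```python
from math import exp

def hard_cascade(stage_scores, thresholds):
    """Classical cascade: accept only if every stage passes."""
    return all(s >= t for s, t in zip(stage_scores, thresholds))

def soft_cascade_prob(stage_scores, thresholds, sharpness=5.0):
    """Probabilistic variant: product of per-stage pass probabilities,
    here a logistic squash of each stage's margin over its threshold."""
    p = 1.0
    for s, t in zip(stage_scores, thresholds):
        p *= 1.0 / (1.0 + exp(-sharpness * (s - t)))
    return p

thresholds = [0.5, 0.5, 0.5]
print(hard_cascade([0.9, 0.45, 0.8], thresholds))    # False: stage 2 fails
p = soft_cascade_prob([0.9, 0.45, 0.8], thresholds)
print(0.0 < p < 1.0)                                 # True: patch still contributes
```

Under the hard rule the near-miss patch is lost entirely; under the soft rule its probability is merely reduced, so a later aggregation step can still use it.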
Citations: 22
On co-training online biometric classifiers
Pub Date : 2011-10-11 DOI: 10.1109/IJCB.2011.6117519
H. Bhatt, Samarth Bharadwaj, Richa Singh, Mayank Vatsa, A. Noore, A. Ross
In an operational biometric verification system, changes in biometric data over a period of time can affect the classification accuracy. Online learning has been used for updating the classifier decision boundary. However, this requires labeled data that is only available during new enrolments. This paper presents a biometric classifier update algorithm in which the classifier decision boundary is updated using both labeled enrolment instances and unlabeled probe instances. The proposed co-training online classifier update algorithm is presented as a semi-supervised learning task and is applied to a face verification application. Experiments indicate that the proposed algorithm improves the performance both in terms of classification accuracy and computational time.
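The co-training intuition behind using unlabeled probes can be sketched as follows. This is a toy illustration under invented names and thresholds: each classifier pseudo-labels the probes it is confident about, and those pseudo-labels become update data for the other classifier.

```python
# Toy co-training sketch: confident decisions of classifier A become
# pseudo-labeled training data for classifier B, and vice versa.

def confident_label(score, lo=0.3, hi=0.7):
    """Return a pseudo-label only when the score is far from the boundary."""
    if score >= hi:
        return 1
    if score <= lo:
        return 0
    return None  # not confident -> contribute nothing

def co_train(probes):
    """probes: list of (score_a, score_b) for unlabeled samples.
    Returns the pseudo-labeled update sets for classifiers A and B."""
    updates_a, updates_b = [], []
    for sa, sb in probes:
        la, lb = confident_label(sa), confident_label(sb)
        if la is not None:
            updates_b.append((sb, la))  # A teaches B
        if lb is not None:
            updates_a.append((sa, lb))  # B teaches A
    return updates_a, updates_b

probes = [(0.9, 0.6), (0.2, 0.1), (0.5, 0.55)]
ua, ub = co_train(probes)
print(ub)  # [(0.6, 1), (0.1, 0)] - A's confident probes update B
```

The third probe, ambiguous to both classifiers, is simply skipped; this is how unlabeled probe instances extend the labeled enrolment data without requiring new enrolments.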
Citations: 18
Fusion of structured projections for cancelable face identity verification
Pub Date : 2011-10-11 DOI: 10.1109/IJCB.2011.6117588
B. Oh, K. Toh
This work proposes a structured random projection via feature weighting for cancelable identity verification. Essentially, projected facial features are weighted based on their discrimination capability prior to a matching process. In order to conceal the face identity, an averaging over several templates with different transformations is performed. Finally, several cancelable templates extracted from partial face images are fused at score level via total error rate minimization. Our empirical experiments on two experimental scenarios using the AR, FERET and Sheffield databases show that the proposed method consistently outperforms competing state-of-the-art unsupervised methods in terms of verification accuracy.
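The cancelable-transform idea behind weighted random projection can be sketched as below. This is a toy illustration, not the paper's structured projection: the seed plays the role of a user-specific token (reissuing a new seed "cancels" a compromised template), and the feature vector, weights, and dimensions are invented.

```python
import numpy as np

def cancelable_template(features, weights, seed, out_dim=4):
    """Weighted random projection: emphasize discriminative features,
    then project with a token-derived random matrix."""
    rng = np.random.default_rng(seed)            # seed acts as the user token
    R = rng.normal(size=(out_dim, features.size))
    return R @ (weights * features)

face = np.array([0.2, 0.7, 0.1, 0.9, 0.4, 0.3])   # toy feature vector
w = np.array([1.0, 2.0, 0.5, 2.0, 1.0, 0.5])      # toy discriminability weights

t1 = cancelable_template(face, w, seed=42)
t2 = cancelable_template(face, w, seed=42)   # same token -> same template
t3 = cancelable_template(face, w, seed=99)   # reissued token -> new template
print(np.allclose(t1, t2), np.allclose(t1, t3))   # True False
```

Matching is then performed between projected templates, so the original face features are never stored; revocation is just a change of seed.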
Citations: 1
Reliability-balanced feature level fusion for fuzzy commitment scheme
Pub Date : 2011-10-11 DOI: 10.1109/IJCB.2011.6117535
C. Rathgeb, A. Uhl, Peter Wild
Fuzzy commitment schemes have been established as a reliable means of binding cryptographic keys to binary feature vectors extracted from diverse biometric modalities. In addition, attempts have been made to extend fuzzy commitment schemes to incorporate multiple biometric feature vectors. Within these schemes potential improvements through feature level fusion are commonly neglected. In this paper a feature level fusion technique for fuzzy commitment schemes is presented. The proposed reliability-balanced feature level fusion is designed to re-arrange and combine two binary biometric templates in a way that error correction capacities are exploited more effectively within a fuzzy commitment scheme yielding improvement with respect to key-retrieval rates. In experiments, which are carried out on iris-biometric data, reliability-balanced feature level fusion significantly outperforms conventional approaches to multi-biometric fuzzy commitment schemes confirming the soundness of the proposed technique.
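The underlying fuzzy commitment round-trip works like this: the key is encoded with an error-correcting code, XOR-bound to the enrolled biometric bits, and released only when a fresh (noisy) reading is close enough for the ECC to correct the difference. A toy sketch with a 3x repetition code standing in for the real ECC; all bitstrings are illustrative.

```python
# Toy fuzzy commitment: commit = ECC(key) XOR biometric bits.

def ecc_encode(key_bits):
    return [b for b in key_bits for _ in range(3)]    # repeat each bit 3x

def ecc_decode(code_bits):
    return [int(sum(code_bits[i:i + 3]) >= 2)         # majority vote per block
            for i in range(0, len(code_bits), 3)]

def commit(key_bits, bio_bits):
    return [c ^ b for c, b in zip(ecc_encode(key_bits), bio_bits)]

def release(commitment, bio_bits):
    return ecc_decode([c ^ b for c, b in zip(commitment, bio_bits)])

key = [1, 0, 1, 1]
enrolled = [0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0]       # enrolment template bits
box = commit(key, enrolled)                           # stored: reveals neither input

noisy = list(enrolled); noisy[4] ^= 1; noisy[9] ^= 1  # probe with 2 bit errors
print(release(box, noisy) == key)     # True: errors corrected, key released
print(release(box, [0] * 12) == key)  # False for this unrelated reading
```

The feature level fusion proposed in the abstract acts before this binding step, rearranging the combined binary template so the ECC's correction budget is spent where the bits are least reliable.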
Citations: 49