
Latest publications from the 2011 International Joint Conference on Biometrics (IJCB)

Is gender classification across ethnicity feasible using discriminant functions?
Pub Date : 2011-10-11 DOI: 10.1109/IJCB.2011.6117524
Tejas I. Dhamecha, A. Sankaran, Richa Singh, Mayank Vatsa
Over the years, automatic gender recognition has been used in many applications. However, limited research has been done on analyzing gender recognition in cross-ethnicity scenarios. This research studies the performance of discriminant functions, including Principal Component Analysis, Linear Discriminant Analysis and Subclass Discriminant Analysis, given a limited training database and unseen ethnicity variations. The experiments are performed on a heterogeneous database of 8112 images that includes variations in illumination, expression, minor pose and ethnicity. Contrary to existing literature, the results show that PCA provides comparable but slightly better performance than PCA+LDA, PCA+SDA and PCA+SVM. The results also suggest that linear discriminant functions provide good generalization capability even with a limited number of training samples and principal components, and with cross-ethnicity variations.
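The comparison above rests on projecting face images into a PCA subspace before classification. As a hedged illustration only (not the authors' code: synthetic Gaussian vectors stand in for the 8112-image database, and a nearest-class-mean rule stands in for the evaluated classifiers), a minimal PCA pipeline might look like:

```python
import numpy as np

def pca_fit(X, n_components):
    """Fit PCA via SVD on mean-centred data; return the mean and the
    top principal axes (one per row of Vt)."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:n_components]

def pca_project(X, mu, components):
    return (X - mu) @ components.T

# Toy stand-in for vectorised face images (two "gender" classes)
rng = np.random.default_rng(0)
class_a = rng.normal(0.0, 1.0, (50, 20))
class_b = rng.normal(1.5, 1.0, (50, 20))
X = np.vstack([class_a, class_b])
y = np.array([0] * 50 + [1] * 50)

mu, comps = pca_fit(X, n_components=5)
Z = pca_project(X, mu, comps)

# Nearest-class-mean classifier in the reduced subspace
means = np.array([Z[y == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(((Z[:, None, :] - means[None, :, :]) ** 2).sum(axis=2), axis=1)
accuracy = (pred == y).mean()
```

In the paper's setting the same projection feeds LDA, SDA or an SVM instead of the nearest-mean rule.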
Citations: 9
Face spoofing detection from single images using micro-texture analysis
Pub Date : 2011-10-11 DOI: 10.1109/IJCB.2011.6117510
Jukka Määttä, A. Hadid, M. Pietikäinen
Current face biometric systems are vulnerable to spoofing attacks. A spoofing attack occurs when a person tries to masquerade as someone else by falsifying data and thereby gaining illegitimate access. Inspired by image quality assessment, characterization of printing artifacts, and differences in light reflection, we propose to approach the problem of spoofing detection from a texture analysis point of view. Indeed, face prints usually contain printing quality defects that can be well detected using texture features. Hence, we present a novel approach based on analyzing facial image textures for detecting whether there is a live person in front of the camera or a face print. The proposed approach analyzes the texture of the facial images using multi-scale local binary patterns (LBP). Compared to many previous works, our proposed approach is robust, computationally fast and does not require user cooperation. In addition, the texture features that are used for spoofing detection can also be used for face recognition. This provides a unique feature space for coupling spoofing detection and face recognition. Extensive experimental analysis on a publicly available database showed excellent results compared to existing works.
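The descriptor at the heart of this approach is the LBP histogram. A minimal single-scale sketch (the paper uses multi-scale LBP on real face crops; a random array stands in for the face image here, and the resulting histogram would feed a classifier such as an SVM):

```python
import numpy as np

def lbp_codes(img):
    """8-neighbour, radius-1 LBP code for every interior pixel:
    each neighbour >= centre contributes one bit."""
    h, w = img.shape
    c = img[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]   # clockwise
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        n = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (n >= c).astype(np.uint8) << np.uint8(bit)
    return codes

def lbp_histogram(img):
    """Normalised 256-bin histogram of LBP codes: the texture descriptor."""
    hist = np.bincount(lbp_codes(img).ravel(), minlength=256).astype(float)
    return hist / hist.sum()

rng = np.random.default_rng(1)
face = rng.integers(0, 256, (64, 64))   # stand-in for a face crop
h = lbp_histogram(face)
```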
Citations: 629
Reliability-balanced feature level fusion for fuzzy commitment scheme
Pub Date : 2011-10-11 DOI: 10.1109/IJCB.2011.6117535
C. Rathgeb, A. Uhl, Peter Wild
Fuzzy commitment schemes have been established as a reliable means of binding cryptographic keys to binary feature vectors extracted from diverse biometric modalities. In addition, attempts have been made to extend fuzzy commitment schemes to incorporate multiple biometric feature vectors. Within these schemes, potential improvements through feature-level fusion are commonly neglected. In this paper, a feature-level fusion technique for fuzzy commitment schemes is presented. The proposed reliability-balanced feature-level fusion is designed to re-arrange and combine two binary biometric templates so that error correction capacities are exploited more effectively within a fuzzy commitment scheme, yielding improvements in key-retrieval rates. In experiments carried out on iris-biometric data, reliability-balanced feature-level fusion significantly outperforms conventional approaches to multi-biometric fuzzy commitment schemes, confirming the soundness of the proposed technique.
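The underlying fuzzy commitment construction binds a key to a binary template by XOR-ing the template with an error-correcting codeword; a noisy but genuine probe unbinds it to within the code's correction capacity. A toy sketch using a 3-bit repetition code as a stand-in ECC (real schemes use much stronger codes, e.g. BCH or Reed-Solomon, and real iris templates are far longer):

```python
import numpy as np

def rep_encode(key_bits, r=3):
    """Toy ECC encoder: repeat every key bit r times."""
    return np.repeat(key_bits, r)

def rep_decode(codeword, r=3):
    """Toy ECC decoder: majority vote within each r-bit group."""
    return (codeword.reshape(-1, r).sum(axis=1) > r // 2).astype(np.uint8)

def commit(key_bits, template, r=3):
    """Bind the key to the template: store codeword XOR template."""
    return rep_encode(key_bits, r) ^ template

def retrieve(commitment, probe, r=3):
    """Unbind with a (noisy) probe template, then error-correct."""
    return rep_decode(commitment ^ probe, r)

rng = np.random.default_rng(2)
key = rng.integers(0, 2, 16).astype(np.uint8)
template = rng.integers(0, 2, 48).astype(np.uint8)  # 16 key bits * r=3
stored = commit(key, template)

# A genuine probe differs from the enrolled template in a few bits;
# one flipped bit per 3-bit group is still corrected by majority vote.
probe = template.copy()
probe[[0, 10, 20]] ^= 1
recovered = retrieve(stored, probe)
```

The paper's contribution is upstream of this step: re-arranging the fused binary templates so the code's correction capacity is spent where the bits are least reliable.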
Citations: 49
Fusion of structured projections for cancelable face identity verification
Pub Date : 2011-10-11 DOI: 10.1109/IJCB.2011.6117588
B. Oh, K. Toh
This work proposes a structured random projection via feature weighting for cancelable identity verification. Essentially, projected facial features are weighted based on their discrimination capability prior to a matching process. In order to conceal the face identity, an averaging over several templates with different transformations is performed. Finally, several cancelable templates extracted from partial face images are fused at score level via total error rate minimization. Our empirical experiments in two experimental scenarios using the AR, FERET and Sheffield databases show that the proposed method consistently outperforms competing state-of-the-art unsupervised methods in terms of verification accuracy.
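The cancelable-template idea rests on passing weighted features through a user-specific, revocable random projection. A minimal sketch under assumed names (the function, the synthetic features and the weights are illustrative, not the authors' implementation, which additionally averages and fuses several such templates):

```python
import numpy as np

def cancelable_template(x, weights, seed, n_out=16):
    """Weight features by discriminability, then apply a user-specific
    random projection; the seed plays the role of a revocable token."""
    rng = np.random.default_rng(seed)
    R = rng.standard_normal((n_out, x.size))
    return R @ (weights * x)

rng = np.random.default_rng(3)
x = rng.standard_normal(64)           # stand-in facial feature vector
w = np.linspace(1.0, 2.0, 64)         # hypothetical discriminability weights

t1 = cancelable_template(x, w, seed=42)
t2 = cancelable_template(x, w, seed=42)   # same token -> same template
t3 = cancelable_template(x, w, seed=99)   # new token -> revoked template
```

Revocation is the point: if a template leaks, issuing a new seed yields an unrelated template from the same face.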
Citations: 1
Fast speaker verification on mobile phone data using boosted slice classifiers
Pub Date : 2011-10-11 DOI: 10.1109/IJCB.2011.6117492
A. Roy, M. Magimai.-Doss, S. Marcel
In this work, we investigate a novel computationally efficient speaker verification (SV) system involving boosted ensembles of simple threshold-based classifiers. The system is based on a novel set of features called “slice features”. Both the system and the features were inspired by the recent success of pixel comparison-based ensemble approaches in the computer vision domain. The performance of the proposed system was evaluated through speaker verification experiments on the MOBIO corpus containing mobile phone speech, according to a challenging protocol. The system was found to perform reasonably well, compared to multiple state-of-the-art SV systems, with the benefit of significantly lower computational complexity. Its dual characteristics of good performance and computational efficiency could be important factors in the context of SV system implementation on portable devices like mobile phones.
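A boosted ensemble of simple threshold classifiers can be sketched as classic AdaBoost over decision stumps; the "slice features" themselves are not reproduced here, so synthetic genuine/impostor feature vectors stand in (this is a sketch of the general technique, not the authors' system):

```python
import numpy as np

def train_stumps(X, y, n_rounds=10):
    """AdaBoost over single-feature threshold classifiers (decision
    stumps). Labels y must be in {-1, +1}."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)
    ensemble = []
    for _ in range(n_rounds):
        best = None
        for j in range(d):
            for thr in np.unique(X[:, j]):
                for sign in (1, -1):
                    pred = sign * np.where(X[:, j] >= thr, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, thr, sign)
        err, j, thr, sign = best
        err = min(max(err, 1e-10), 1 - 1e-10)     # avoid log(0)
        alpha = 0.5 * np.log((1 - err) / err)     # stump weight
        pred = sign * np.where(X[:, j] >= thr, 1, -1)
        w *= np.exp(-alpha * y * pred)            # up-weight mistakes
        w /= w.sum()
        ensemble.append((alpha, j, thr, sign))
    return ensemble

def predict(ensemble, X):
    score = sum(a * s * np.where(X[:, j] >= t, 1, -1)
                for a, j, t, s in ensemble)
    return np.sign(score)

# Synthetic genuine (+1) vs impostor (-1) vectors stand in for slice features.
rng = np.random.default_rng(4)
X = np.vstack([rng.normal(1.0, 0.5, (40, 3)), rng.normal(-1.0, 0.5, (40, 3))])
y = np.array([1] * 40 + [-1] * 40)
model = train_stumps(X, y)
accuracy = float((predict(model, X) == y).mean())
```

The appeal for mobile deployment is that each stump is one comparison, so scoring a probe is a handful of threshold tests rather than a full statistical model evaluation.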
Citations: 4
Gait-based age estimation using a whole-generation gait database
Pub Date : 2011-10-11 DOI: 10.1109/IJCB.2011.6117531
Yasushi Makihara, Mayu Okumura, Haruyuki Iwama, Y. Yagi
This paper addresses gait-based age estimation using a large-scale whole-generation gait database. Previous work on gait-based age estimation evaluated methods using databases that included at most 170 subjects with limited age variation, which was insufficient to statistically demonstrate the possibility of gait-based age estimation. Therefore, we first constructed a much larger whole-generation gait database which includes 1,728 subjects with ages ranging from 2 to 94 years. We then provided a baseline algorithm for gait-based age estimation implemented by Gaussian process regression, which has achieved successes in the face-based age estimation field, in conjunction with silhouette-based gait features such as the averaged silhouette (or Gait Energy Image) that has been used extensively in many gait recognition algorithms. Finally, experiments using the whole-generation gait database demonstrated the viability of gait-based age estimation.
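Gaussian process regression, the baseline estimator named above, fits in a few lines of numpy. The sketch below regresses age on a hypothetical 1-D gait feature that is merely assumed to vary smoothly with age; the real baseline uses high-dimensional silhouette features, and the kernel and noise settings here are illustrative:

```python
import numpy as np

def rbf_kernel(A, B, length_scale):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-0.5 * d2 / length_scale ** 2)

def gp_predict(X_train, y_train, X_test, length_scale=0.2, noise=1e-2):
    """Posterior mean of GP regression with an RBF kernel."""
    K = rbf_kernel(X_train, X_train, length_scale)
    K[np.diag_indices_from(K)] += noise          # observation noise / jitter
    alpha = np.linalg.solve(K, y_train - y_train.mean())
    return rbf_kernel(X_test, X_train, length_scale) @ alpha + y_train.mean()

# Hypothetical 1-D gait feature assumed to vary smoothly with age
rng = np.random.default_rng(5)
ages = rng.uniform(2, 94, 80)
feat = (ages / 94.0)[:, None] + rng.normal(0.0, 0.01, (80, 1))
pred = gp_predict(feat, ages, feat)
mae = float(np.abs(pred - ages).mean())
```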
Citations: 73
The effect of time on ear biometrics
Pub Date : 2011-10-11 DOI: 10.1109/IJCB.2011.6117584
Mina I. S. Ibrahim, M. Nixon, S. Mahmoodi
We present an experimental study to demonstrate the effect of the time difference between gallery and probe image acquisition on the performance of ear recognition. This is the first study of the time effect on ear biometrics. For recognition, we convolve banana wavelets with an ear image and then apply the local binary pattern operator to the convolved image. The histograms of the resulting image are then used as features to describe an ear. A histogram intersection technique is then applied to the histograms of two ears to measure their similarity for recognition purposes. We also use analysis of variance (ANOVA) for feature selection, to identify the best banana wavelets for the recognition process. The experimental results show that the recognition rate is only slightly reduced by time. An average recognition rate of 98.5% is achieved for an eleven-month difference between gallery and probe on an un-occluded dataset of 1491 ear images selected from the Southampton University ear database.
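The similarity measure named above, histogram intersection, is simply the sum of bin-wise minima of two normalised histograms (1.0 for identical histograms, 0.0 for disjoint ones). A small sketch with made-up 5-bin histograms standing in for the paper's banana-wavelet/LBP descriptors:

```python
import numpy as np

def hist_intersection(h1, h2):
    """Histogram intersection similarity: sum of bin-wise minima;
    equals 1.0 for identical normalised histograms."""
    return float(np.minimum(h1, h2).sum())

def normalise(counts):
    h = np.asarray(counts, dtype=float)
    return h / h.sum()

# Made-up texture histograms for a probe ear and two gallery ears
probe    = normalise([4, 10, 6, 0, 2])
genuine  = normalise([5, 9, 6, 1, 1])
impostor = normalise([0, 1, 2, 10, 9])

s_genuine = hist_intersection(probe, genuine)    # high: similar shapes
s_impostor = hist_intersection(probe, impostor)  # low: mass in other bins
```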
Citations: 27
Biometric identification via eye movement scanpaths in reading
Pub Date : 2011-10-11 DOI: 10.1109/IJCB.2011.6117536
C. Holland, Oleg V. Komogortsev
This paper presents an objective evaluation of various eye movement-based biometric features and their ability to accurately and precisely distinguish unique individuals. Eye movements are uniquely counterfeit-resistant due to the complex neurological interactions and the extraocular muscle properties involved in their generation. The considered biometric candidates cover a number of basic eye movements and their aggregated scanpath characteristics, including: fixation count, average fixation duration, average saccade amplitude, average saccade velocity, average saccade peak velocity, the velocity waveform, scanpath length, scanpath area, regions of interest, scanpath inflections, the amplitude-duration relationship, the main sequence relationship, and the pairwise distance between fixations. In addition, an information fusion method for combining these metrics into a single identification algorithm is presented. With limited testing, this method was able to identify subjects with an equal error rate of 27%. These results indicate that scanpath-based biometric identification holds promise as a behavioral biometric technique.
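Extracting fixation and saccade statistics of the kind listed above is commonly done with a velocity-threshold (I-VT) segmentation. The sketch below uses a synthetic 1-D scanpath and a generic 100 deg/s threshold; it illustrates the technique, not the authors' pipeline:

```python
import numpy as np

def ivt_features(x, t, threshold=100.0):
    """Minimal I-VT segmentation of a 1-D gaze trace (degrees):
    samples whose velocity exceeds the threshold are saccade samples,
    the rest belong to fixations."""
    v = np.abs(np.diff(x) / np.diff(t))      # point-to-point velocity, deg/s
    sac = v > threshold
    # a fixation is a maximal run of consecutive non-saccade samples
    starts_with_fix = int(~sac[0])
    sac_to_fix = int((np.diff(sac.astype(int)) == -1).sum())
    fixation_count = starts_with_fix + sac_to_fix
    mean_sac_velocity = float(v[sac].mean()) if sac.any() else 0.0
    return fixation_count, mean_sac_velocity

# Synthetic 100 Hz scanpath: fixate at 0 deg, one saccade to 5 deg, fixate
t = np.arange(40) * 0.01
x = np.where(t < 0.2, 0.0, 5.0)
n_fix, v_sac = ivt_features(x, t)
```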
Citations: 140
Face recognition across time lapse: On learning feature subspaces
Pub Date : 2011-10-11 DOI: 10.1109/IJCB.2011.6117547
Brendan Klare, Anil K. Jain
There is a growing interest in understanding the impact of aging on face recognition performance, as well as in designing recognition algorithms that are largely invariant to temporal changes. While some success has been made on this front, a fundamental question has yet to be answered: do face recognition systems that compensate for the effects of aging compromise recognition performance on faces that have not undergone any aging? The studies in this paper help confirm that age-invariant systems do seem to decrease performance in non-aging scenarios. This is demonstrated by performing training experiments on the largest face aging dataset studied in the literature to date (over 200,000 images from roughly 64,000 subjects). Further experiments conducted in this research help demonstrate the impact of aging on two leading commercial face recognition systems. We also determine the regions of the face that remain the most stable over time.
Citations: 48
Towards automated pose invariant 3D dental biometrics
Pub Date : 2011-10-11 DOI: 10.1109/IJCB.2011.6117541
Xin Zhong, Deping Yu, K. Foong, T. Sim, Y. Wong, Ho-Lun Cheng
A novel pose-invariant 3D dental biometrics framework for human identification by matching dental plaster models is proposed in this paper. Using 3D overcomes a number of key problems that plague 2D methods. As best we can tell, our study is the first attempt at 3D dental biometrics. It includes a multi-scale feature extraction algorithm for extracting pose-invariant feature points and a triplet-correspondence algorithm for pose estimation. Preliminary experimental results achieve 100% rank-1 accuracy by matching 7 postmortem (PM) samples against 100 ante-mortem (AM) samples. Towards fully automated 3D dental identification, the accuracy reaches 71.4% at rank 1 and 100% at rank 4. Compared with existing algorithms, the feature point extraction algorithm and the triplet-correspondence algorithm are faster and more robust for pose estimation, and the retrieval time for a single subject has been significantly reduced. Furthermore, we find that the investigated dental features are discriminative and useful for identification. The high accuracy, fast retrieval speed and facilitated identification process suggest that the developed 3D framework is suitable for practical use in future dental biometrics applications. Finally, the limitations and future research directions are discussed.
Citations: 6
Journal
2011 International Joint Conference on Biometrics (IJCB)
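The rank-1 and rank-4 accuracies reported in the abstract above follow the standard identification-mode evaluation used in biometrics (the cumulative match characteristic). As a minimal sketch of that computation — using a hypothetical similarity matrix for illustration, not the paper's PM/AM data — the metric counts how often a probe's true gallery mate appears among its top-k ranked candidates:

```python
def rank_k_accuracy(scores, k):
    """Fraction of probes whose true mate (gallery index == probe index)
    appears among the top-k gallery candidates by similarity score."""
    hits = 0
    for i, row in enumerate(scores):
        # gallery indices sorted by decreasing similarity to probe i
        ranking = sorted(range(len(row)), key=lambda j: -row[j])
        if i in ranking[:k]:
            hits += 1
    return hits / len(scores)

# Hypothetical 3-probe x 3-gallery similarity matrix (illustration only);
# probe i's true mate is gallery subject i.
scores = [
    [0.9, 0.1, 0.2],  # true mate ranks first
    [0.3, 0.8, 0.1],  # true mate ranks first
    [0.7, 0.6, 0.2],  # true mate ranks third -> misses rank-1 and rank-2
]
print(rank_k_accuracy(scores, 1))  # 2 of 3 probes matched at rank 1
print(rank_k_accuracy(scores, 3))  # 1.0
```

In the paper's setting, the probe set would be the 7 postmortem samples and the gallery the 100 ante-mortem samples; "100% at rank 4" means every probe's true mate appeared within the top four candidates.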