
2011 International Joint Conference on Biometrics (IJCB) — Latest Publications

Two faces are better than one: Face recognition in group photographs
Pub Date : 2011-10-11 DOI: 10.1109/IJCB.2011.6117516
O. K. Manyam, Neeraj Kumar, P. Belhumeur, D. Kriegman
Face recognition systems classically recognize people individually. When presented with a group photograph containing multiple people, such systems implicitly assume statistical independence between each detected face. We question this basic assumption and consider instead that there is a dependence between face regions from the same image; after all, the image was acquired with a single camera, under consistent lighting (distribution, direction, spectrum), camera motion, and scene/camera geometry. Such naturally occurring commonalities between face images can be exploited when recognition decisions are made jointly across the faces, rather than independently. Furthermore, when recognizing people in isolation, some features such as color are usually uninformative in unconstrained settings. But by considering pairs of people, the relative color difference provides valuable information. This paper reconsiders the independence assumption, introduces new features and methods for recognizing pairs of individuals in group photographs, and demonstrates a marked improvement when these features are used in joint decision making vs. independent decision making. While these features alone are only moderately discriminative, we combine these new features with state-of-the-art attribute features and demonstrate effective recognition performance. Initial experiments on two datasets show promising improvements in accuracy.
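The relative-color cue described above can be illustrated with a minimal sketch. The function names and array shapes here are illustrative, not from the paper: the pairwise feature is simply the difference of mean face colors within one photograph, so lighting common to the photo largely cancels.

```python
import numpy as np

def mean_face_color(face):
    """Mean RGB color of a face crop given as an (H, W, 3) array."""
    return face.reshape(-1, 3).mean(axis=0)

def relative_color_feature(face_a, face_b):
    """Pairwise feature for two faces cropped from the SAME photograph:
    the difference of their mean colors. Illumination shared by the
    photo largely cancels, so the relative difference stays informative
    even when absolute color is not."""
    return mean_face_color(face_a) - mean_face_color(face_b)

# Two synthetic 4x4 "faces": one uniformly brighter than the other.
a = np.full((4, 4, 3), 120.0)
b = np.full((4, 4, 3), 100.0)
print(relative_color_feature(a, b))  # [20. 20. 20.]
```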
Citations: 31
Face recognition in low-resolution videos using learning-based likelihood measurement model
Pub Date : 2011-10-11 DOI: 10.1109/IJCB.2011.6117514
S. Biswas, G. Aggarwal, P. Flynn
Low-resolution surveillance videos with uncontrolled pose and illumination present a significant challenge to both face tracking and recognition algorithms. Considerable appearance difference between the probe videos and the high-resolution controlled images in the gallery acquired during enrollment makes the problem even harder. In this paper, we extend the simultaneous tracking and recognition framework [22] to address the problem of matching high-resolution gallery images with surveillance-quality probe videos. We propose using a learning-based likelihood measurement model to handle the large appearance and resolution difference between the gallery images and probe videos. The measurement model consists of a mapping which transforms the gallery and probe features to a space in which their inter-Euclidean distances approximate the distances that would have been obtained had all the descriptors been computed from good-quality frontal images. Experimental results on real surveillance-quality videos and comparisons with related approaches show the effectiveness of the proposed framework.
Citations: 19
Evaluation of gender classification methods on thermal and near-infrared face images
Pub Date : 2011-10-11 DOI: 10.1109/IJCB.2011.6117544
Cunjian Chen, A. Ross
Automatic gender classification based on face images is receiving increased attention in the biometrics community. Most gender classification systems have been evaluated only on face images captured in the visible spectrum. In this work, the possibility of deducing gender from face images obtained in the near-infrared (NIR) and thermal (THM) spectra is established. It is observed that the use of local binary pattern histogram (LBPH) features along with discriminative classifiers results in reasonable gender classification accuracy in both the NIR and THM spectra. Further, the performance of human subjects in classifying thermal face images is studied. Experiments suggest that machine-learning methods are better suited than humans for gender classification from face images in the thermal spectrum.
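The LBPH features evaluated above can be sketched in a minimal form. This basic 8-neighbor variant and its parameters are assumptions, not the paper's exact setup; each interior pixel is encoded by thresholding its 8 neighbors against it, and the normalized code histogram is the texture descriptor.

```python
import numpy as np

def lbp_histogram(img, bins=256):
    """Basic 8-neighbor local binary pattern histogram (LBPH).
    Each interior pixel gets an 8-bit code: bit b is set when the
    b-th neighbor is >= the center pixel. The descriptor is the
    normalized histogram of codes."""
    center = img[1:-1, 1:-1]
    codes = np.zeros(center.shape, dtype=np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    for bit, (dy, dx) in enumerate(offsets):
        neighbor = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neighbor >= center).astype(np.uint8) << bit
    hist, _ = np.histogram(codes, bins=bins, range=(0, bins))
    return hist / hist.sum()

# A flat patch maps every interior pixel to code 255 (all 8 bits set).
h = lbp_histogram(np.full((6, 6), 50, dtype=np.uint8))
print(h[255])  # 1.0
```

In practice the image is divided into blocks and per-block histograms are concatenated before being fed to a discriminative classifier such as an SVM.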
Citations: 62
On matching latent to latent fingerprints
Pub Date : 2011-10-11 DOI: 10.1109/IJCB.2011.6117525
A. Sankaran, Tejas I. Dhamecha, Mayank Vatsa, Richa Singh
This research presents a forensics application of matching two latent fingerprints. In crime scene settings, it is often required to match multiple latent fingerprints. Unlike matching latent with inked or live fingerprints, this research problem is very challenging and requires proper analysis and attention. The contribution of this paper is threefold: (i) a comparative analysis of existing algorithms is presented for this application, (ii) fusion and context switching frameworks are presented to improve the identification performance, and (iii) a multi-latent fingerprint database is prepared. The experiments highlight the need for improved feature extraction and processing methods and show that there is large scope for improvement in this important research problem.
Citations: 66
Study on the BeiHang Keystroke Dynamics Database
Pub Date : 2011-10-11 DOI: 10.1109/IJCB.2011.6117485
Yilin Li, Baochang Zhang, Yao Cao, Sanqiang Zhao, Yongsheng Gao, Jianzhuang Liu
This paper introduces a new BeiHang (BH) Keystroke Dynamics Database for testing and evaluation of biometric approaches. Different from existing keystroke dynamics research, which relies solely on laboratory experiments, the developed database is collected from a real commercialized system and is thus more comprehensive and more faithful to human behavior. Moreover, the database comes with ready-to-use benchmark results for three keystroke dynamics methods: the Nearest Neighbor classifier, the Gaussian Model, and the One-Class Support Vector Machine. Both the database and the benchmark results are open to the public and provide a significant experimental platform for international researchers in the keystroke dynamics area.
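One of the benchmarked methods, the Nearest Neighbor classifier, can be sketched for keystroke timing vectors as follows. The feature layout (hold times and inter-key latencies) and the threshold value are illustrative assumptions, not the paper's protocol.

```python
import numpy as np

def enroll(samples):
    """Template: the genuine user's keystroke timing vectors,
    e.g. key hold times and inter-key latencies in seconds."""
    return np.asarray(samples, dtype=float)

def nn_score(template, probe):
    """Nearest-neighbor score: distance from the probe to the
    closest enrolled sample (smaller = more likely genuine)."""
    return np.min(np.linalg.norm(template - np.asarray(probe), axis=1))

def verify(template, probe, threshold):
    """Accept the probe when its nearest-neighbor score is small enough."""
    return nn_score(template, probe) <= threshold

genuine = enroll([[0.10, 0.21, 0.15],
                  [0.11, 0.20, 0.16]])
print(verify(genuine, [0.10, 0.20, 0.15], threshold=0.05))  # True
print(verify(genuine, [0.30, 0.45, 0.40], threshold=0.05))  # False
```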
Citations: 52
Score-level fusion based on the direct estimation of the Bayes error gradient distribution
Pub Date : 2011-10-11 DOI: 10.1109/IJCB.2011.6117532
Yasushi Makihara, D. Muramatsu, Y. Yagi, Md. Altab Hossain
This paper describes a method of score-level fusion to optimize a Receiver Operating Characteristic (ROC) curve for multimodal biometrics. When the Probability Density Functions (PDFs) of the multimodal scores for each client and imposter are obtained from the training samples, it is well known that the isolines of a function of probabilistic densities, such as the likelihood ratio, posterior, or Bayes error gradient, give the optimal ROC curve. The success of the probability density-based methods depends on the PDF estimation for each client and imposter, which still remains a challenging problem. Therefore, we introduce a framework of direct estimation of the Bayes error gradient that bypasses the troublesome PDF estimation for each client and imposter. The lattice-type control points are allocated in a multiple score space, and the Bayes error gradients on the control points are then estimated in a comprehensive manner in the energy minimization framework including not only the data fitness of the training samples but also the boundary conditions and monotonic increase constraints to suppress the over-training. The experimental results for both simulation and real public data show the effectiveness of the proposed method.
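The likelihood-ratio fusion that the abstract cites as yielding the optimal ROC can be illustrated with independent Gaussian score models. The Gaussian assumption and the parameters below are illustrative, not the paper's estimated PDFs (the paper's point is precisely that estimating those PDFs is hard, motivating direct estimation of the Bayes error gradient).

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Univariate Gaussian density."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def fused_llr(scores, client_params, impostor_params):
    """Sum of per-modality log likelihood ratios
    log p(s | client) - log p(s | impostor). Thresholding this fused
    score traces the optimal ROC when the score PDFs are correct."""
    llr = 0.0
    for s, (mc, sc), (mi, si) in zip(scores, client_params, impostor_params):
        llr += math.log(gaussian_pdf(s, mc, sc)) - math.log(gaussian_pdf(s, mi, si))
    return llr

client = [(0.8, 0.1), (0.8, 0.1)]    # per-modality (mean, std) for clients
impostor = [(0.2, 0.1), (0.2, 0.1)]  # per-modality (mean, std) for impostors
print(fused_llr([0.75, 0.70], client, impostor) > 0)  # True: accepted as client
print(fused_llr([0.25, 0.30], client, impostor) > 0)  # False: rejected
```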
Citations: 10
Speech cryptographic key regeneration based on password
Pub Date : 2011-10-11 DOI: 10.1109/IJCB.2011.6117553
K. Inthavisas, D. Lopresti
In this paper, we propose a way to combine a password with a speech biometric cryptosystem. We present two schemes to enhance verification performance in a biometric cryptosystem using a password. Both can resist a password brute-force search if the biometrics are not compromised. Even if the biometrics are compromised, attackers must spend many more attempts searching for cryptographic keys than with a traditional password-based approach. In addition, the experimental results show that the verification performance is significantly improved.
Citations: 10
Learning weighted sparse representation of encoded facial normal information for expression-robust 3D face recognition
Pub Date : 2011-10-11 DOI: 10.1109/IJCB.2011.6117555
Huibin Li, Di Huang, J. Morvan, Liming Chen
This paper proposes a novel approach for 3D face recognition by learning a weighted sparse representation of encoded facial normal information. To comprehensively describe the 3D facial surface, the three components of the normal vector, along the X, Y, and Z axes respectively, are encoded locally into their corresponding normal pattern histograms. These are finally fed to a sparse representation classifier enhanced by learning-based spatial weights. Experimental results on the FRGC v2.0 database prove that the proposed encoded normal information is much more discriminative than the original normal information. Moreover, the patch-based weights learned using the FRGC v1.0 and Bosphorus datasets also demonstrate the importance of each facial physical component for 3D face recognition.
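The per-component encoding of normals can be sketched roughly as below. This raw component histogram is a simplified stand-in for the paper's normal pattern histograms (which encode local patterns of each component), and the bin count and patch layout are assumptions.

```python
import numpy as np

def normal_component_histograms(normals, bins=8):
    """Encode a patch of unit normal vectors (an (N, 3) array) by
    histogramming each of the X, Y, Z components separately over
    [-1, 1] and concatenating the three normalized histograms."""
    hists = []
    for c in range(3):
        h, _ = np.histogram(normals[:, c], bins=bins, range=(-1.0, 1.0))
        hists.append(h / len(normals))
    return np.concatenate(hists)

# Synthetic patch: every normal points along +Z.
d = normal_component_histograms(np.tile([0.0, 0.0, 1.0], (10, 1)))
print(d.shape)  # (24,)
```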
Citations: 13
Mining patterns of orientations and magnitudes for face recognition
Pub Date : 2011-10-11 DOI: 10.1109/IJCB.2011.6117538
Ngoc-Son Vu, A. Caplier
A good face recognition system is one that quickly delivers highly accurate results to the end user. For this purpose, the face representation must be robust, discriminative, and of low computational cost in terms of both time and space. Inspired by the recently proposed feature set called POEM (Patterns of Oriented Edge Magnitudes), which considers the relationships between edge distributions of different image patches and is argued to balance these three concerns well, this work further exploits patterns of both orientations and magnitudes to build a more efficient algorithm. We first present novel features called Patterns of Dominant Orientations (PDO), which consider the relationships between the "dominant" orientations of local image regions at different scales. We also propose applying the whitened PCA technique to both the POEM- and PDO-based representations to obtain more compact and discriminative face descriptors. We then show that the two methods have complementary strengths and that by combining the two descriptors, one obtains stronger results than with either of them considered separately. By experiments carried out on several common benchmarks, including both frontal and non-frontal FERET as well as the AR datasets, we show that our approach is more efficient than contemporary ones.
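Whitened PCA, which the abstract applies to the POEM and PDO representations, projects descriptors onto the top principal components and rescales each retained direction to unit variance. A minimal sketch (the synthetic data and dimensions are illustrative):

```python
import numpy as np

def whitened_pca(X, k):
    """Project the rows of X onto the top-k principal components and
    divide each by sqrt(eigenvalue), so every retained direction has
    unit sample variance ("whitening")."""
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / (len(X) - 1)
    vals, vecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    top = np.argsort(vals)[::-1][:k]   # indices of the k largest
    W = vecs[:, top] / np.sqrt(vals[top])
    return Xc @ W

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5)) * [5.0, 2.0, 1.0, 0.5, 0.1]
Y = whitened_pca(X, 2)
print(np.allclose(Y.var(axis=0, ddof=1), 1.0))  # True
```

Equalizing the variance of the retained directions prevents a few high-energy components from dominating descriptor distances.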
Citations: 15
Eyebrow shape-based features for biometric recognition and gender classification: A feasibility study
Pub Date : 2011-10-11 DOI: 10.1109/IJCB.2011.6117511
Yujie Dong, D. Woodard
A wide variety of applications in forensic, government, and commercial fields require reliable personal identification. However, recognition performance is severely affected by non-ideal images caused by motion blur, poor contrast, varied expressions, or illumination artifacts. In this paper, we investigated the use of shape-based eyebrow features under non-ideal imaging conditions for biometric recognition and gender classification. We extracted various shape-based features from the eyebrow images and compared three different classification methods: the Minimum Distance classifier (MD), the Linear Discriminant Analysis classifier (LDA), and the Support Vector Machine classifier (SVM). The methods were tested on images from two publicly available facial image databases: the Multiple Biometric Grand Challenge (MBGC) database and the Face Recognition Grand Challenge (FRGC) database. The obtained recognition rates of 90% on the MBGC database and 75% on the FRGC database, together with gender classification rates of 96% and 97% respectively, suggest that shape-based eyebrow features may be used for biometric recognition and soft biometric classification.
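The Minimum Distance classifier compared above assigns a probe to the class with the nearest class centroid. A minimal sketch on synthetic 2-D features (the feature values are made up for illustration):

```python
import numpy as np

def fit_centroids(X, y):
    """Minimum Distance (MD) classifier: one mean vector per class."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(centroids, x):
    """Assign the probe to the class with the nearest centroid."""
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

# Synthetic 2-D "eyebrow shape" features for two classes.
X = np.array([[1.0, 1.0], [1.2, 0.9], [3.0, 3.1], [2.9, 3.0]])
y = np.array([0, 0, 1, 1])
cents = fit_centroids(X, y)
print(predict(cents, np.array([1.1, 1.0])))  # 0
print(predict(cents, np.array([3.0, 3.0])))  # 1
```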
Citations: 54