
Latest publications from the 2009 IEEE 3rd International Conference on Biometrics: Theory, Applications, and Systems

Human identification using KnuckleCodes
Ajay Kumar, Yingbo Zhou
The usage of finger knuckle images for personal identification has shown promising results and generated a lot of interest in biometrics. In this work, we investigate a new approach for efficient and effective personal identification using KnuckleCodes. The enhanced knuckle images are employed to generate KnuckleCodes using a localized Radon transform that can efficiently characterize random curved lines and creases. The similarity between two KnuckleCodes is computed from the minimum matching distance, which can account for the variations resulting from translation and positioning of fingers. The feasibility of the proposed approach is investigated on a finger knuckle database from 158 subjects. The experimental results, i.e., an equal error rate of 1.08% and a rank-one recognition rate of 98.6%, suggest the utility of the proposed approach for online human identification.
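As a rough illustration of the idea only (not the authors' implementation), a KnuckleCode-like template can be built by labelling each image block with its dominant line direction and matching codes with a minimum distance over small shifts. The four fixed directions below are a crude stand-in for the localized Radon transform, and the names `line_profiles`, `knuckle_code`, and `min_match_distance` are hypothetical:

```python
import numpy as np

def line_profiles(patch):
    """Mean intensity along four line families: a crude stand-in for a
    localized Radon transform at 0, 45, 90 and 135 degrees."""
    n = patch.shape[0]
    cols = patch.mean(axis=0)   # responds to vertical lines
    rows = patch.mean(axis=1)   # responds to horizontal lines
    diag = np.array([np.diagonal(patch, k).mean() for k in range(-n + 1, n)])
    anti = np.array([np.diagonal(patch[::-1], k).mean() for k in range(-n + 1, n)])
    return [cols, rows, diag, anti]

def knuckle_code(img, block=8):
    """Label every block with the line direction of highest contrast."""
    h, w = img.shape
    code = np.zeros((h // block, w // block), dtype=np.uint8)
    for bi in range(h // block):
        for bj in range(w // block):
            patch = img[bi * block:(bi + 1) * block, bj * block:(bj + 1) * block]
            code[bi, bj] = int(np.argmax([p.var() for p in line_profiles(patch)]))
    return code

def min_match_distance(c1, c2, max_shift=1):
    """Smallest normalized disagreement over small translations, so that
    slight finger repositioning does not inflate the distance."""
    best = 1.0
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(c1, dy, axis=0), dx, axis=1)
            best = min(best, float(np.mean(shifted != c2)))
    return best
```

Taking the minimum over shifts is what absorbs the translation and finger-positioning variation the abstract mentions.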
DOI: 10.1109/BTAS.2009.5339021
Citations: 110
Canonical Stiefel Quotient and its application to generic face recognition in illumination spaces
Y. Lui, J. Beveridge, M. Kirby
This paper presents a new paradigm for face recognition in illumination spaces when the identities of training subjects and test subjects do not overlap. Previous methods employ illumination models to create a projector from an illumination basis and perform single image classification. In contrast, we apply an illumination model to an image and create a set of illumination variants. For a gallery image, these variants are expressed as a point on a Stiefel manifold with an associated tangent plane. Two projections of the probe image illumination variants onto this tangent plane are defined and the ratio between these two projections, called the Canonical Stiefel Quotient (CSQ), is a measure of distance between images. We show that the proposed CSQ paradigm not only outperforms the traditional single image matching approach but also other variants of image set matching including a geodesic method. Furthermore, the proposed CSQ method is robust to the choice of training sets. Finally, our analyses reveal the benefits of using image set classification over single image matching.
DOI: 10.1109/BTAS.2009.5339026
Citations: 19
Unconstrained face recognition using MRF priors and manifold traversing
R. N. Rodrigues, Greyce N. Schroeder, Jason J. Corso, V. Govindaraju
In this paper, we explore new methods to improve the modeling of facial images under different types of variations such as pose, ambient illumination and facial expression. We investigate the intuitive assumption that the parameters for the distribution of facial images change smoothly with respect to variations in the face pose angle. A Markov Random Field is defined to model a smooth prior over the parameter space and the maximum a posteriori solution is computed. We also propose extensions to the view-based face recognition method by learning how to traverse between different subspaces so we can synthesize facial images with different characteristics for the same person. This allows us to enroll a new user with a single 2D image.
DOI: 10.1109/BTAS.2009.5339080
Citations: 5
Difficult detection: A comparison of two different approaches to eye detection for unconstrained environments
W. Scheirer, A. Rocha, B. Heflin, T. Boult
Eye detection is a well-studied problem for constrained face recognition, where we find controlled distances, lighting, and limited pose variation. A far more difficult scenario for eye detection is the unconstrained face recognition problem, where we do not have any control over the environment or the subject. In this paper, we examine two different approaches for eye detection under difficult acquisition circumstances, including low light, distance, pose variation, and blur. A new machine learning approach and several correlation filter approaches, including a new adaptive variant, are compared. We present experimental results on a variety of controlled data sets (derived from FERET and CMU PIE) that have been re-imaged under the difficult conditions of interest with an EMCCD-based acquisition system. The results of our experiments show that our new detection approaches are extremely accurate under all tested conditions, and significantly improve detection accuracy compared to a leading commercial detector. This unique evaluation brings us one step closer to a better solution for the unconstrained face recognition problem.
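The correlation-filter family of detectors compared here rests on cross-correlating a trained filter with the image and locating the response peak. A minimal FFT-based sketch (not the paper's adaptive variant; `correlate_fft` and `detect` are hypothetical names):

```python
import numpy as np

def correlate_fft(image, template):
    """Circular cross-correlation via the FFT: the core operation behind
    correlation-filter detectors."""
    H = np.fft.fft2(image)
    T = np.fft.fft2(template, s=image.shape)  # zero-pad template to image size
    return np.real(np.fft.ifft2(H * np.conj(T)))

def detect(image, template):
    """Top-left corner (row, col) of the strongest correlation peak."""
    surface = correlate_fft(image, template)
    return np.unravel_index(int(np.argmax(surface)), surface.shape)
```

In practice the template is replaced by a filter trained to sharpen the peak and suppress clutter; a zero-mean template already reduces false peaks on uniformly bright regions.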
DOI: 10.1109/BTAS.2009.5339040
Citations: 10
Biometric authentication using augmented face and random projection
Hosik Sohn, Yong Man Ro, K. Plataniotis
In this paper, we propose a revocable and privacy-preserving template of face biometrics based on random projection. The face biometric is augmented and simultaneously projected onto a random subspace. The face image vector is augmented by adding a vector whose elements vary with zero mean. The augmented face vector can provide better authentication accuracy and privacy preservation. We analyze the similarity, privacy-preservation, and security properties of the proposed augmented face biometric information in the random-projection domain. To demonstrate the feasibility of the proposed method, a detailed theoretical analysis and several experimental results are provided. The results show that our method is able to provide revocability and privacy preservation while offering better authentication accuracy and security.
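A minimal sketch of the random-projection half of such a scheme (the paper's zero-mean augmentation step is omitted, and `protect` is a hypothetical name): each user's template lives in a key-specific random subspace, so a leaked template is revoked simply by re-enrolling under a new key, while distances between features of the same key are approximately preserved.

```python
import numpy as np

def protect(feature, key, dim_out=64):
    """Project a feature vector into a key-specific random subspace.
    A compromised template is revoked by re-enrolling with a new key;
    without the key, the projection is hard to invert."""
    rng = np.random.default_rng(key)  # the key seeds the projection matrix
    P = rng.standard_normal((dim_out, feature.size)) / np.sqrt(dim_out)
    return P @ feature
```

The 1/sqrt(dim_out) scaling makes the projection approximately distance-preserving (in the Johnson-Lindenstrauss sense), so genuine/impostor score separation survives in the protected domain.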
DOI: 10.1109/BTAS.2009.5339014
Citations: 5
A computational efficient iris extraction approach in unconstrained environments
Yu Chen, M. Adjouadi, A. Barreto, N. Rishe, J. Andrian
This research introduces a noise-resistant and computationally efficient segmentation approach towards less constrained iris recognition. The UBIRIS.v2 database, which contains close-up eye images taken under visible light, is used to test the proposed algorithm. The proposed segmentation approach is based on a modified and fast Hough transform augmented with a newly developed strategy to define iris boundaries with multi-arcs and multi-lines. This optimized iris segmentation approach achieves excellent results in both accuracy (2% error) and execution speed (≤0.5s / image) using a 2.4GHz Intel® Q6600 processor with 2GB of RAM. This 2% error is an Exclusive-OR function in terms of disagreeing pixels between the correct iris considered by the NICE.I committee and the segmented results from the proposed approach. The segmentation performance was independently evaluated in the “Noisy Iris Challenge Evaluation”, involving 97 participants worldwide, ranking this research group in the top 6.
DOI: 10.1109/BTAS.2009.5339024
Citations: 26
Fingerprint recognition performance in rugged outdoors and cold weather conditions
Ron F. Stewart, Matt Estevao, A. Adler
This paper reports on tests of the performance of fingerprint recognition technology in rugged outdoor conditions, with a particular focus on performance in cold weather. We analyze: 1) chip versus optical fingerprint scanner technology, 2) recognition performance and image quality, and 3) user/device interaction. An outdoor fingerprint door-access system was designed to capture fingerprint images and video data of user interactions. Using this device, data were captured over a period of two years, and a user survey was performed. Data were analyzed in terms of biometric error rates and fingerprint quality (NFIQ) as a function of temperature and humidity. Results suggest: 1) biometric performance has no significant dependence on temperature and humidity (-30°C to +20°C), 2) both chip-based and optical fingerprint scanners have some flaws in rugged and cold-weather applications, and 3) overall, fingerprint biometric technology has a good level of usability in this application.
DOI: 10.1109/BTAS.2009.5339061
Citations: 12
Point-pair descriptors for 3D facial landmark localisation
M. Romero, Nick E. Pears
Our pose-invariant point-pair descriptors, which encode 3D shape between a pair of 3D points, are described and evaluated. Two variants of the descriptor are introduced: the first is the point-pair spin image, which is related to the classical spin image of Johnson and Hebert, and the second is derived from an implicit radial basis function (RBF) model of the facial surface. We call this a cylindrically sampled RBF (CSR) shape histogram. These descriptors can effectively encode edges in graph-based representations of 3D shapes. Thus, they are useful in a wide range of 3D graph-based retrieval applications. Here we show how the descriptors are able to identify the nose-tip and the eye-corner of a human face simultaneously in six promising landmark localisation systems. We evaluate our approaches by computing root mean square errors of estimated landmark locations against our ground-truth landmark localisations within the 3D Face Recognition Grand Challenge database.
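In the spirit of the point-pair spin image (though not the paper's RBF-based CSR histogram, which samples an implicit surface model), a toy point-pair descriptor can histogram the raw surface points in cylindrical coordinates around the axis joining the two landmarks; only relative geometry enters, which is what makes it pose-invariant. `point_pair_descriptor` is a hypothetical name:

```python
import numpy as np

def point_pair_descriptor(cloud, p1, p2, n_h=4, n_r=4, r_max=1.0):
    """Toy point-pair descriptor: 2D histogram of surface points in
    cylindrical coordinates (height along the p1->p2 axis, radius from
    that axis), normalized to sum to 1."""
    axis = p2 - p1
    length = np.linalg.norm(axis)
    u = axis / length
    rel = cloud - p1
    h = rel @ u                                   # height along the axis
    radial = np.linalg.norm(rel - np.outer(h, u), axis=1)
    keep = (h >= 0) & (h <= length) & (radial <= r_max)
    hist, _, _ = np.histogram2d(h[keep], radial[keep], bins=(n_h, n_r),
                                range=[[0, length], [0, r_max]])
    return hist / max(hist.sum(), 1.0)
```

Because heights and radii are unchanged by any rigid motion of the cloud together with the point pair, the descriptor is identical before and after rotation.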
DOI: 10.1109/BTAS.2009.5339009
Citations: 12
Multi-algorithm fusion with template protection
E. Kelkboom, X. Zhou, J. Breebaart, R. Veldhuis, C. Busch
The popularity of biometrics and its widespread use introduce privacy risks. To mitigate these risks, solutions such as the helper-data system, fuzzy vault, fuzzy extractors, and cancelable biometrics were introduced, collectively known as the field of template protection. In parallel to these developments, fusion of multiple sources of biometric information has been shown to improve the verification performance of a biometric system. In this work we analyze fusion of the protected templates from two 3D recognition algorithms (multi-algorithm fusion) at feature, score, and decision level. We show that fusion can be applied at the known fusion levels with the template protection technique known as the Helper-Data System. We also illustrate the required changes to the Helper-Data System and its corresponding limitations. Furthermore, our experimental results, based on 3D face range images of the FRGC v2 dataset, show that fusion indeed improves the verification performance.
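The paper fuses protected templates inside the Helper-Data System; as a generic illustration of one of the three levels it studies, score-level fusion amounts to normalizing each matcher's scores onto a common scale and combining them per comparison. A minimal sketch (`minmax` and `fuse_scores` are hypothetical names):

```python
def minmax(scores):
    """Map one matcher's raw scores onto [0, 1]."""
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo) for s in scores]

def fuse_scores(scores_a, scores_b, w=0.5):
    """Score-level fusion: min-max normalize each matcher, then take a
    weighted sum per comparison."""
    return [w * a + (1 - w) * b
            for a, b in zip(minmax(scores_a), minmax(scores_b))]
```

Decision-level fusion would instead AND/OR the two accept decisions, and feature-level fusion would combine the feature vectors before template protection is applied.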
DOI: 10.1109/BTAS.2009.5339045
Citations: 71
Towards 3D-aided profile-based face recognition
B. Efraty, E. Ismailov, S. Shah, I. Kakadiaris
In this paper, we present a fully automatic system for face recognition based on a silhouette of the face profile. Previous research has demonstrated the high discriminative potential of this biometric. However, for the successful employment of this characteristic one is confronted with many challenges, such as the sensitivity of a profile's geometry to face rotation and the difficulty of accurate profile extraction from images. We propose to explore the feature space of profiles under various rotations with the aid of a 3D face model. In the enrollment mode, 3D data of subjects are acquired and used to create profiles under different rotations. The features extracted from these profiles are used to train a classifier. In the identification mode, the profiles are extracted from side view images using a modified Active Shape Model approach. We validate the accuracy of the extractor and the robustness of classification algorithms using data from a publicly available database.
DOI: 10.1109/BTAS.2009.5339078
Citations: 5