
Latest publications from the 2009 IEEE 3rd International Conference on Biometrics: Theory, Applications, and Systems

Pose manifold curvature is typically less near frontal face views
Mohammad Nayeem Teli, J. Beveridge
This research presents a study of the geometry of the face manifold as a person changes horizontal pose from one profile to the other. Although a great deal of research has gone into determining an ideal pose for pose-invariant face recognition, less has been done to characterize the manifold traced out by these pose variations. The novelty of our approach lies in a finely sampled profile-to-profile dataset that is analyzed using Locally Linear Embedding (LLE) to estimate the curvature of these manifolds. Our results indicate that the profile-to-profile manifold is less curved, and hence more linear, in the region around the frontal view than in any other region of the manifold, i.e. any other pose.
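As a toy illustration of the curvature claim (not the paper's LLE pipeline), the sketch below samples a synthetic 1-D "pose manifold" that is flat near the frontal view and measures discrete curvature with finite differences; the curve and all data are invented.

```python
import numpy as np

# Toy illustration (invented data, not the paper's LLE pipeline):
# a synthetic 1-D "pose manifold" that is flat near the frontal view
# (theta = 0) and bends towards the profiles.
theta = np.linspace(-1.0, 1.0, 2001)          # normalized pose angle
curve = np.column_stack([theta, theta ** 4])  # flat near 0, curved at the ends

# Discrete curvature: kappa = |x'y'' - y'x''| / (x'^2 + y'^2)^(3/2)
dx, dy = np.gradient(curve[:, 0]), np.gradient(curve[:, 1])
ddx, ddy = np.gradient(dx), np.gradient(dy)
kappa = np.abs(dx * ddy - dy * ddx) / (dx ** 2 + dy ** 2) ** 1.5

mid = len(theta) // 2                         # index of the "frontal view"
print(kappa[mid] < kappa[0])                  # True: curvature is lower at the frontal view
```

On this toy curve the curvature vanishes at theta = 0 and grows towards the profiles, mirroring the qualitative finding of the abstract.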
Citations: 3
Learning-based image representation and method for face recognition
Zhiming Liu, Chengjun Liu, Qingchuan Tao
This paper presents a novel method for face recognition. First, we generate a new image representation from decorrelated hybrid color configurations, rather than the RGB color space, via a learning algorithm. The learning algorithm, Principal Component Analysis (PCA) plus Fisher Linear Discriminant analysis (FLD), derives the desired color transformation to generate a discriminating image representation that is optimal for face recognition. Second, we partition the face image into small patches, each of which obtains its own color transformation, to reduce the effect of illumination variations. Thus, a novel patch-based image representation method is proposed for face recognition. Experiments on the Face Recognition Grand Challenge (FRGC) version 2 Experiment 4 show that the proposed method outperforms gray-scale images and several recent methods in face recognition.
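A minimal sketch of the PCA-plus-FLD learning step on toy 3-channel "color" vectors; the two-class setup, dimensions, and data are invented for illustration and are not the paper's actual training procedure.

```python
import numpy as np

# Toy PCA + Fisher LDA sketch on invented 3-channel "color" vectors.
rng = np.random.default_rng(1)
X0 = rng.normal([1, 0, 0], 0.2, size=(100, 3))   # class 0 samples
X1 = rng.normal([0, 1, 0], 0.2, size=(100, 3))   # class 1 samples
X = np.vstack([X0, X1])
y = np.array([0] * 100 + [1] * 100)

# PCA: project onto the top-2 principal directions
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
P = Xc @ Vt[:2].T

# Fisher LDA in the PCA space: w = Sw^{-1} (m1 - m0)
m0, m1 = P[y == 0].mean(axis=0), P[y == 1].mean(axis=0)
Sw = np.cov(P[y == 0].T) + np.cov(P[y == 1].T)   # within-class scatter
w = np.linalg.solve(Sw, m1 - m0)                 # discriminating direction

proj = P @ w
print(proj[y == 0].mean() < proj[y == 1].mean())  # True: classes separate along w
```

The learned direction `w` plays the role of the discriminating color transformation described in the abstract, here on made-up two-class data.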
Citations: 2
A biometric database with rotating head videos and hand-drawn face sketches
Hanan A. Al Nizami, Jeremy P. Adkins-Hill, Yong Zhang, J. Sullins, Christine McCullough, Shaun J. Canavan, L. Yin
The past decade has witnessed significant progress in biometric technologies, to a large degree due to the availability of a wide variety of public databases that enable benchmark performance evaluations. In this paper, we describe a new database that includes: (i) rotating head videos of 259 subjects; (ii) 250 hand-drawn face sketches of 50 subjects. Rotating head videos were acquired under both normal indoor lighting and shadow conditions. Each video captured four expressions: neutral, smile, surprise, and anger. For each subject, video frames at ten pose angles were manually labeled using reference images and empirical rules, to facilitate the investigation of multi-frame fusion. The database can also be used to study 3D face recognition by reconstructing a 3D face model from videos. In addition, this is the only currently available database that has a large number of face sketches drawn by multiple artists. The face sketches are a valuable resource for many research topics, such as forensic analysis of eyewitness recollection, assessment of the impact of face degradation on recognition rate, and comparative evaluation of sketch recognition by humans and algorithms.
Citations: 15
Robust modified Active Shape Model for automatic facial landmark annotation of frontal faces
Keshav Seshadri, M. Savvides
In this paper we present an improved method for locating facial landmarks in images containing frontal faces using a modified Active Shape Model. Our main contributions include the use of an optimal number of facial landmark points, better profiling methods during the fitting stage, and the development of a more suitable optimization metric to determine the best location of the landmarks than the simplistic minimum Mahalanobis distance criterion used to date. We build a subspace to model variations of appearance around each facial landmark and use this subspace to enhance the accuracy of the fitting process around each landmark. This enhancement provides a significant improvement in fitting and simultaneously determines, via reconstruction error, which points were poorly fitted, thus allowing for automatic correction or interpolation of any poorly fitted points. Our implementation, with the above-mentioned improvements, leads to extremely accurate results even when dealing with faces with expressions, slight pose variations, and in-plane rotations. Experiments conducted on test sets drawn from three databases (NIST Multiple Biometric Grand Challenge-2008 (MBGC-2008), CMU Multi-PIE, and the Japanese Female Facial Expression (JAFFE) database) show that our proposed approach performs far better than the classical Active Shape Model of Cootes et al. and other traditional methods, and provides robust automatic facial landmark annotation, which is the first critical step in face registration, pose correction, and face recognition.
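The idea of flagging poorly fitted landmarks by subspace reconstruction error can be sketched as follows; the appearance vectors, subspace dimension, and corruption model are all hypothetical stand-ins for the paper's learned appearance subspaces.

```python
import numpy as np

rng = np.random.default_rng(2)
# Training appearance patches lying (by construction) in a 5-dim subspace
basis = rng.standard_normal((5, 50))
train = rng.standard_normal((200, 5)) @ basis
mean = train.mean(axis=0)

# Learn the subspace from training data via SVD
Xc = train - mean
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
V = Vt[:5]                                    # 5-dim appearance subspace

def recon_error(patch):
    c = patch - mean
    return np.linalg.norm(c - (c @ V.T) @ V)  # distance to the subspace

good = rng.standard_normal(5) @ basis         # a well-fitted landmark patch
bad = good + 5 * rng.standard_normal(50)      # a poorly fitted (corrupted) patch
print(recon_error(good) < recon_error(bad))   # True
```

A patch consistent with the learned appearance model reconstructs almost perfectly; a corrupted patch has large residual and can be flagged for correction or interpolation.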
Citations: 71
Simultaneous latent fingerprint recognition: A preliminary study
Mayank Vatsa, Richa Singh, A. Noore, Keith B. Morris
Recent cases such as Commonwealth v. Patterson show that there is a lack of research into how to process and recognize simultaneous fingerprint impressions, especially when none of the latent prints in the cluster can be individually matched. SWGFAST released the first version of its standard on simultaneous impression examination, which can help fingerprint examiners systematically compare latent simultaneous impressions to a known ten-print card. However, when the individual is not known, the simultaneous fingerprint impressions have to be compared against a large database of reference ten-prints, making the process very challenging. This paper introduces the research problem of identifying simultaneous latent fingerprint impressions to the community and presents a semi-automatic approach to process the impressions of any individual. The approach generates a list of top matches, which latent fingerprint examiners can then examine for individualization. Using a fingerprint database that contains simultaneous latent impressions, we analyze the performance of the proposed approach when matching simultaneous impressions against the gallery database.
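One plausible way to turn per-impression match scores into a single ranked candidate list is score-level sum-rule fusion; the abstract does not specify the fusion rule, so this toy sketch (invented identifiers and scores) is only an assumption about how such a candidate list could be produced.

```python
# Toy sum-rule fusion of match scores from two simultaneous latent
# impressions against a small gallery (all names and scores invented).
gallery = ["id_a", "id_b", "id_c"]
scores_per_impression = [
    {"id_a": 0.40, "id_b": 0.90, "id_c": 0.30},  # latent print 1
    {"id_a": 0.35, "id_b": 0.70, "id_c": 0.50},  # latent print 2
]

# Fuse by summing scores across impressions, then rank candidates
fused = {g: sum(s[g] for s in scores_per_impression) for g in gallery}
ranked = sorted(gallery, key=fused.get, reverse=True)
print(ranked[0])  # id_b
```

The ranked list corresponds to the "list of top matches" that examiners would then review for individualization.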
Citations: 6
Generalized multi-ethnic face age-estimation
K. Ricanek, Yishi Wang, Cuixian Chen, S. J. Simmons
Age estimation from digital pictures of the face is a very promising research field that is now receiving wide attention. As with any good research problem, face age-estimation is fraught with many challenging interactions that cannot easily be separated out. In general, aging patterns are well understood for all humans; however, these patterns become confounded by intrinsic factors of genetics, gender differences, and ethnic deviations and, equally importantly, by extrinsic factors of environment and behavioral choices (e.g. sun exposure, drugs, cigarettes). This novel work focuses on the development of a generalized multi-ethnic age-estimation technique, the first of its kind. In addition to the novelty of this approach, the system's overall performance measure, mean absolute error (MAE), is on par with algorithms that are tuned for a specific ethnic group. Further, the proposed system's performance proves far more stable across age than the best published results.
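The MAE measure cited above is simply the mean absolute difference between predicted and true ages, as in this toy computation (the ages are invented):

```python
import numpy as np

# MAE on invented ages: mean of |prediction - truth|
true_age = np.array([25, 40, 33, 60])
pred_age = np.array([28, 38, 30, 55])
mae = np.mean(np.abs(pred_age - true_age))
print(mae)  # 3.25
```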
Citations: 43
Comparing verification performance of kids and adults for Fingerprint, Palmprint, Hand-geometry and Digitprint biometrics
A. Uhl, Peter Wild
With the large-scale deployment of biometrics for access control in private and public places, systems face the challenge of processing a diverse range of people. Most systems have been well evaluated for adults; however, their application in schools, or for private door access control, raises the question of whether there are significant differences in performance between age groups in general, and between kids and adults in particular. This paper presents an evaluation of the impact of children as biometric users on recognition accuracy for a series of hand-based modalities: Fingerprint, Palmprint, Hand-geometry, and Digitprint. Furthermore, we attempt to analyze the reasons for child-aging effects on performance at both the feature and instance level using our database of 301 kids and 86 adults.
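Verification performance differences between groups are commonly summarized by the equal error rate (EER); the sketch below compares EER between two toy user groups whose genuine/impostor score distributions are invented, and is not the paper's actual protocol or data.

```python
import numpy as np

def eer(genuine, impostor):
    """Approximate equal error rate by sweeping thresholds over all scores."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    best = 1.0
    for t in thresholds:
        far = np.mean(impostor >= t)   # false accept rate at threshold t
        frr = np.mean(genuine < t)     # false reject rate at threshold t
        best = min(best, max(far, frr))
    return best

rng = np.random.default_rng(3)
# Invented score distributions: the "kids" group separates less cleanly
kids_eer = eer(rng.normal(0.6, 0.15, 500), rng.normal(0.4, 0.15, 500))
adult_eer = eer(rng.normal(0.7, 0.10, 500), rng.normal(0.3, 0.10, 500))
print(adult_eer < kids_eer)  # True on this toy data
```

A group whose genuine and impostor score distributions overlap more (here, the toy "kids" group) yields a higher EER, which is the kind of group-wise difference the paper investigates.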
Citations: 30
Partial matching of interpose 3D facial data for face recognition
P. Perakis, G. Passalis, T. Theoharis, G. Toderici, I. Kakadiaris
Three-dimensional face recognition has lately received much attention due to its robustness in the presence of lighting and pose variations. However, certain pose variations often result in missing facial data. This is common in realistic scenarios, such as uncontrolled environments and uncooperative subjects. Most previous 3D face recognition methods do not handle extensive missing data as they rely on frontal scans. Currently, there is no method to perform recognition across scans of different poses. A unified method that addresses the partial matching problem is proposed. Both frontal and side (left or right) facial scans are handled in a way that allows interpose retrieval operations. The main contributions of this paper include a novel 3D landmark detector and a deformable model framework that supports symmetric fitting. The landmark detector is utilized to detect the pose of the facial scan. This information is used to mark areas of missing data and to roughly register the facial scan with an Annotated Face Model (AFM). The AFM is fitted using a deformable model framework that introduces the method of exploiting facial symmetry where data are missing. Subsequently, a geometry image is extracted from the fitted AFM that is independent of the original pose of the facial scan. Retrieval operations, such as face identification, are then performed on a wavelet domain representation of the geometry image. Thorough testing was performed by combining the largest publicly available databases. To the best of our knowledge, this is the first method that handles side scans with extensive missing data (e.g., up to half of the face missing).
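The facial-symmetry idea, completing a side scan by mirroring data across the sagittal plane, can be sketched on a toy point cloud; the coordinates are invented, and the paper's deformable-model fitting is far more involved than this reflection.

```python
import numpy as np

# Toy left-side-only face point cloud: points with x <= 0
# (x is the symmetry axis; the right half is "missing").
left = np.array([[-0.3, 0.1, 0.5],
                 [-0.1, 0.4, 0.6]])

# Exploit facial symmetry: reflect across the x = 0 plane
mirrored = left * np.array([-1.0, 1.0, 1.0])
full = np.vstack([left, mirrored])
print(full.shape)  # (4, 3)
```

The mirrored points stand in for the missing half of the face, so that a full model can be fitted even to a side scan.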
Citations: 38
Agent-based image iris segmentation and multiple views boundary refining
R. D. Labati, V. Piuri, F. Scotti
The paper presents two different methods to deal with the problem of iris segmentation: an agent-based method capable of localizing the center of the pupil, and a method to process the iris boundaries using a multiple-view approach. In the first method, an agent corresponds to the coordinates of a specific point of analysis in the input image. A population of agents is deployed in the input image; each agent then collects local information concerning the intensity patterns visible in its region of interest. Over successive iterations, an agent changes its position according to these local properties, moving towards the estimated pupil center. If no usable information is present in its region of interest, the agent moves along a random walk. After a few iterations, the population tends to spread and then concentrate in the inner portion of the pupil. Once the center of the pupil has been located, the inner and outer iris boundaries are refined by an approach based on multiple-view analysis. This method starts with a set of points that can be considered approximations of the pupil center. For each point, a detailed estimate of the iris boundaries is computed, and the final description of the iris boundaries is obtained by merging all the obtained descriptions. The two methods were tested using CASIA v.3 and UBIRIS v.2 images. Experiments show that the proposed approaches are feasible, even on eye images taken in noisy or non-ideal conditions, achieving a total segmentation accuracy of up to 97%.
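A toy rendition of the agent behavior on a synthetic image: agents spread over the image repeatedly step toward the darkest neighboring pixel and accumulate at the dark pupil blob. The image, agent rules, and parameters are simplified stand-ins for the paper's method, not its actual implementation.

```python
import numpy as np

# Synthetic eye image: bright background with a dark "pupil" blob at (40, 24)
h = w = 64
yy, xx = np.mgrid[0:h, 0:w]
img = 1.0 - np.exp(-((yy - 40.0) ** 2 + (xx - 24.0) ** 2) / 200.0)

# Deploy agents on a coarse grid; each repeatedly steps to the darkest
# pixel in its 3x3 neighborhood (a greedy stand-in for the paper's rules).
agents = np.array([(y, x) for y in range(0, h, 8) for x in range(0, w, 8)])
for _ in range(100):
    for i, (y, x) in enumerate(agents):
        ys = np.clip([y - 1, y, y + 1], 0, h - 1)
        xs = np.clip([x - 1, x, x + 1], 0, w - 1)
        k = int(img[np.ix_(ys, xs)].argmin())
        agents[i] = (ys[k // 3], xs[k % 3])

# Agents accumulate at the pupil; their mean estimates its center
center = agents.mean(axis=0)
print(np.round(center))  # [40. 24.]
```

On this smooth synthetic image every agent descends to the intensity minimum, so the population mean recovers the pupil center exactly; on real images the paper's random-walk behavior handles flat, uninformative regions.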
Citations: 35
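The agent dynamics described in the abstract above can be sketched in a few lines. This is a toy illustration, not the authors' implementation: the window size, contrast threshold, agent count, and the synthetic radial image are all assumptions made for the sake of a runnable example.

```python
import numpy as np

def locate_pupil_center(img, n_agents=150, n_iters=60, win=3, seed=0):
    """Toy agent-based pupil-center search.

    Each agent inspects a (2*win+1)^2 window around its position; if the
    window shows useful contrast it jumps to the darkest pixel in it
    (the pupil is the darkest region of an eye image), otherwise it
    takes a small random step. The population concentrates around the
    intensity minimum, and its median position estimates the center.
    """
    rng = np.random.default_rng(seed)
    h, w = img.shape
    ys = rng.integers(win, h - win, n_agents)
    xs = rng.integers(win, w - win, n_agents)
    for _ in range(n_iters):
        for i in range(n_agents):
            y, x = int(ys[i]), int(xs[i])
            patch = img[y - win:y + win + 1, x - win:x + win + 1]
            if patch.max() - patch.min() > 1.0:   # useful local contrast
                dy, dx = np.unravel_index(np.argmin(patch), patch.shape)
                y, x = y + dy - win, x + dx - win
            else:                                 # flat region: random walk
                y += int(rng.integers(-2, 3))
                x += int(rng.integers(-2, 3))
            ys[i] = np.clip(y, win, h - win - 1)
            xs[i] = np.clip(x, win, w - win - 1)
    return float(np.median(ys)), float(np.median(xs))

# Toy "eye": intensity grows with distance from the pupil center (60, 70),
# so the pupil is the darkest spot, as in a near-infrared iris image.
yy, xx = np.mgrid[:120, :120]
img = np.sqrt((yy - 60.0) ** 2 + (xx - 70.0) ** 2)

cy, cx = locate_pupil_center(img)
print(cy, cx)  # → 60.0 70.0
```

On a real eye image the contrast test would keep agents from drifting aimlessly on flat sclera or skin regions, while the dark pupil basin attracts the agents that reach its rim.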
A novel matching algorithm for distorted fingerprints based on penalized quadratic model
Kai Cao, Xin Yang, Xunqiang Tao, Yangyang Zhang, Jie Tian
At present, one of the most challenging problems in fingerprint recognition is the matching of distorted fingerprints. In this paper, we propose a penalized quadratic model to deal with non-linear distortion. First, minutiae as well as sampling points on all the ridges are employed to represent the fingerprint. Second, the similarity between minutiae is estimated from their neighboring sampling points. Third, a greedy matching algorithm is adopted to establish the initial minutiae correspondences, which are used to select landmarks for calculating the quadratic model parameters. Finally, the input fingerprint is warped and the matching process is run again to obtain a similarity score between the warped fingerprint and the template fingerprint. To diminish the impact of erroneous landmarks, we introduce a penalty term into the quadratic model to keep it smooth. Experimental results on FVC2004 DB1 confirm that the quadratic model effectively describes the inner-image transformation of a quadratic skin surface, and that the proposed strategy improves the performance of the fingerprint matching algorithm.
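The greedy step in the pipeline above — establishing the initial one-to-one minutiae correspondences — can be sketched as follows. The similarity matrix here is made up for illustration; the paper derives its similarities from neighboring ridge sampling points, which is not shown.

```python
import numpy as np

def greedy_match(sim):
    """Greedy one-to-one minutiae assignment from a similarity matrix.

    Candidate pairs are visited in order of decreasing similarity; a
    pair is kept only if neither minutia is matched yet, giving the
    initial correspondences from which warp landmarks are selected.
    """
    n, m = sim.shape
    used_a, used_b, pairs = set(), set(), []
    for flat in np.argsort(sim, axis=None)[::-1]:
        i, j = divmod(int(flat), m)
        if sim[i, j] <= 0.0:       # remaining pairs are not similar at all
            break
        if i not in used_a and j not in used_b:
            pairs.append((i, j))
            used_a.add(i)
            used_b.add(j)
    return pairs

# Three template minutiae vs. three input minutiae (toy similarities).
sim = np.array([[0.9, 0.1, 0.0],
                [0.2, 0.8, 0.3],
                [0.0, 0.4, 0.7]])
print(greedy_match(sim))  # → [(0, 0), (1, 1), (2, 2)]
```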
DOI: 10.1109/BTAS.2009.5339018
Citations: 4
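A minimal sketch of the penalized quadratic warp is given below, assuming an L2 (ridge) penalty on the second-order coefficients; the abstract does not spell out the exact penalty, so the basis, the penalty weight `lam`, and the toy landmark pairs are illustrative assumptions.

```python
import numpy as np

def quad_basis(pts):
    """Quadratic polynomial basis [1, x, y, x^2, x*y, y^2] for each point."""
    x, y = pts[:, 0], pts[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x * x, x * y, y * y])

def fit_penalized_quadratic(src, dst, lam=0.5):
    """Ridge-penalized least-squares fit of a quadratic warp src -> dst.

    Only the second-order coefficients are penalized: the affine part is
    left free while the quadratic bending is shrunk toward zero, so a
    few erroneous landmarks cannot bend the warp wildly.
    """
    phi = quad_basis(src)                          # (n, 6) design matrix
    pen = np.diag([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])  # penalize x^2, x*y, y^2
    return np.linalg.solve(phi.T @ phi + lam * pen, phi.T @ dst)

def warp(pts, coeffs):
    return quad_basis(pts) @ coeffs

# Toy landmark pairs: a mild quadratic skin distortion plus one bad match.
rng = np.random.default_rng(1)
src = rng.uniform(0.0, 1.0, (30, 2))
dst = src + 0.05 * src ** 2        # true smooth distortion
dst[0] += 2.0                      # one erroneous correspondence

coeffs = fit_penalized_quadratic(src, dst)
inlier_err = float(np.abs(warp(src[1:], coeffs) - dst[1:]).mean())
print(round(inlier_err, 3))
```

Raising `lam` flattens the warp toward a pure affine transform, which is how a smoothing penalty keeps a gross landmark error from distorting the mapping over the whole fingerprint.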
Journal
2009 IEEE 3rd International Conference on Biometrics: Theory, Applications, and Systems