
Latest publications from the 2009 IEEE 3rd International Conference on Biometrics: Theory, Applications, and Systems

On-line signature authentication using Zernike moments
K. Radhika, M. K. Venkatesha, G N Shekar
Zernike moments are image descriptors often used in pattern recognition, and they offer rotation invariance. In this paper, we discuss a novel method of signature authentication using Zernike moments. Instead of working on primary features such as the image or the raw on-line data, working on a derived kinematic plot is a robust way to authenticate; the derived kinematic plot considered in this paper is the acceleration plot. Each signature's on-line acceleration information is weighted by Zernike moments, and shape analysis of the acceleration plot using only lower-order Zernike moments is performed to authenticate the on-line signature.
{"title":"On-line signature authentication using Zernike moments","authors":"K. Radhika, M. K. Venkatesha, G N Shekar","doi":"10.1109/BTAS.2009.5339022","DOIUrl":"https://doi.org/10.1109/BTAS.2009.5339022","url":null,"abstract":"Zernike moments are image descriptors often used in pattern recognition. They offer rotation invariance. In this paper, we discuss a novel method of signature authentication using Zernike moments. Instead of working on primary features such as image or on-line data, working on the derived kinematic plot is a robust way of authentication. The derived kinematic plot considered in this paper is acceleration plot. Each signature's on-line acceleration information is being weighted by Zernike moment. The shape analysis of the acceleration plot, using only lower order Zernike moments is performed for authentication of on-line signature.","PeriodicalId":325900,"journal":{"name":"2009 IEEE 3rd International Conference on Biometrics: Theory, Applications, and Systems","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2009-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125339227","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 10
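As an illustration of the descriptor used in the abstract above, the following is a minimal sketch (not the authors' implementation) of computing rotation-invariant, low-order Zernike moment magnitudes from a rasterized acceleration plot; the 64x64 toy image, the order limit, and the function names are assumptions made for the example.

```python
import numpy as np
from math import factorial

def radial_poly(rho, n, m):
    """Zernike radial polynomial R_nm evaluated on an array of radii rho."""
    R = np.zeros_like(rho)
    for s in range((n - abs(m)) // 2 + 1):
        c = ((-1) ** s * factorial(n - s)
             / (factorial(s)
                * factorial((n + abs(m)) // 2 - s)
                * factorial((n - abs(m)) // 2 - s)))
        R = R + c * rho ** (n - 2 * s)
    return R

def zernike_magnitudes(img, max_order=6):
    """Rotation-invariant Zernike moment magnitudes |A_nm| for n <= max_order."""
    h, w = img.shape
    y, x = np.mgrid[-1:1:h * 1j, -1:1:w * 1j]
    rho, theta = np.sqrt(x ** 2 + y ** 2), np.arctan2(y, x)
    mask = rho <= 1.0                                  # unit-disk support
    feats = []
    for n in range(max_order + 1):
        for m in range(-n, n + 1):
            if (n - abs(m)) % 2:                       # n - |m| must be even
                continue
            kernel = radial_poly(rho, n, m) * np.exp(-1j * m * theta)
            A = (n + 1) / np.pi * np.sum(img[mask] * kernel[mask])
            feats.append(abs(A))                       # magnitude is rotation-invariant
    return np.array(feats)

# toy stand-in for a rasterized acceleration plot (64x64 binary image)
plot_img = np.zeros((64, 64))
plot_img[20:44, 30:34] = 1.0
print(zernike_magnitudes(plot_img, max_order=4))
```

Because only the magnitudes |A_nm| are kept, the feature vector does not change if the plot is rotated about the image center, which is the invariance the abstract relies on.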
Pitfall of the Detection Rate Optimized Bit Allocation within template protection and a remedy
E. Kelkboom, K.T.J. de Groot, C. Chen, J. Breebaart, R. Veldhuis
One of the requirements of a biometric template protection system is that the protected template ideally should not leak any information about the biometric sample or its derivatives. In the literature, several proposed template protection techniques are based on binary vectors. Hence, they require the extraction of a binary representation from the real-valued biometric sample. In this work we focus on the Detection Rate Optimized Bit Allocation (DROBA) quantization scheme that extracts multiple bits per feature component while maximizing the overall detection rate. The allocation strategy has to be stored as auxiliary data for reuse in the verification phase and is considered public. This implies that the auxiliary data should not leak any information about the extracted binary representation. Experiments in our work show that the original DROBA algorithm, as known in the literature, creates auxiliary data that leaks a significant amount of information. We show how an adversary is able to exploit this information and significantly increase its success rate of obtaining a false accept. Fortunately, the information leakage can be mitigated by restricting the allocation freedom of the DROBA algorithm. We propose a method based on population statistics and empirically illustrate its effectiveness. All the experiments are based on the MCYT fingerprint database using two different texture-based feature extraction algorithms.
{"title":"Pitfall of the Detection Rate Optimized Bit Allocation within template protection and a remedy","authors":"E. Kelkboom, K.T.J. de Groot, C. Chen, J. Breebaart, R. Veldhuis","doi":"10.1109/BTAS.2009.5339046","DOIUrl":"https://doi.org/10.1109/BTAS.2009.5339046","url":null,"abstract":"One of the requirements of a biometric template protection system is that the protected template ideally should not leak any information about the biometric sample or its derivatives. In the literature, several proposed template protection techniques are based on binary vectors. Hence, they require the extraction of a binary representation from the real- valued biometric sample. In this work we focus on the Detection Rate Optimized Bit Allocation (DROBA) quantization scheme that extracts multiple bits per feature component while maximizing the overall detection rate. The allocation strategy has to be stored as auxiliary data for reuse in the verification phase and is considered as public. This implies that the auxiliary data should not leak any information about the extracted binary representation. Experiments in our work show that the original DROBA algorithm, as known in the literature, creates auxiliary data that leaks a significant amount of information. We show how an adversary is able to exploit this information and significantly increase its success rate on obtaining a false accept. Fortunately, the information leakage can be mitigated by restricting the allocation freedom of the DROBA algorithm. We propose a method based on population statistics and empirically illustrate its effectiveness. All the experiments are based on the MCYT fingerprint database using two different texture based feature extraction algorithms.","PeriodicalId":325900,"journal":{"name":"2009 IEEE 3rd International Conference on Biometrics: Theory, Applications, and Systems","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2009-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114722816","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 12
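To make the bit-allocation idea concrete, here is a much-simplified sketch of a DROBA-like greedy allocation: each feature's background distribution is assumed standard normal and split into equiprobable bins, and bits are assigned greedily where the loss in detection rate is smallest. The Gaussian assumption, the greedy rule, and all names are illustrative rather than the paper's exact algorithm; note that the resulting allocation vector is precisely the kind of auxiliary data whose leakage the paper analyzes.

```python
import numpy as np
from scipy.stats import norm

def detection_rate(mu, sigma, bits):
    """Probability that a genuine sample falls in the enrolled bin, assuming a
    standard-normal background split into 2**bits equiprobable bins."""
    if bits == 0:
        return 1.0
    edges = norm.ppf(np.linspace(0, 1, 2 ** bits + 1))
    k = np.searchsorted(edges, mu) - 1               # bin containing the user mean
    return norm.cdf(edges[k + 1], mu, sigma) - norm.cdf(edges[k], mu, sigma)

def droba_like_allocation(mus, sigmas, total_bits, max_bits=3):
    """Greedy allocation maximizing the product of per-feature detection rates
    (a simplified stand-in for DROBA)."""
    bits = np.zeros(len(mus), dtype=int)
    for _ in range(total_bits):
        gains = []
        for i in range(len(mus)):
            if bits[i] >= max_bits:
                gains.append(-np.inf)
                continue
            old = detection_rate(mus[i], sigmas[i], bits[i])
            new = detection_rate(mus[i], sigmas[i], bits[i] + 1)
            gains.append(np.log(new + 1e-12) - np.log(old + 1e-12))  # log-product gain
        bits[int(np.argmax(gains))] += 1             # smallest detection-rate loss wins
    return bits

rng = np.random.default_rng(0)
mus, sigmas = rng.normal(0, 1, 20), rng.uniform(0.2, 0.8, 20)
print(droba_like_allocation(mus, sigmas, total_bits=30))
```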
Age estimation using Active Appearance Models and Support Vector Machine regression
Khoa Luu, K. Ricanek, T. D. Bui, C. Suen
In this paper, we introduce a novel age estimation technique that combines Active Appearance Models (AAMs) and Support Vector Machines (SVMs) to dramatically improve the accuracy of age estimation over the current state-of-the-art techniques. In this method, characteristics of the input face images are interpreted as feature vectors by AAMs, which are used to discriminate between childhood and adulthood prior to age estimation. Faces classified as adults are passed to the adult age-determination function and the others are passed to the child age-determination function. Compared to published results, this method yields the best accuracy, both in overall mean absolute error (MAE) and in the MAE for the two periods of human development: childhood and adulthood.
{"title":"Age estimation using Active Appearance Models and Support Vector Machine regression","authors":"Khoa Luu, K. Ricanek, T. D. Bui, C. Suen","doi":"10.1109/BTAS.2009.5339053","DOIUrl":"https://doi.org/10.1109/BTAS.2009.5339053","url":null,"abstract":"In this paper, we introduce a novel age estimation technique that combines Active Appearance Models (AAMs) and Support Vector Machines (SVMs), to dramatically improve the accuracy of age estimation over the current state-of-the-art techniques. In this method, characteristics of the input images, face image, are interpreted as feature vectors by AAMs, which are used to discriminate between childhood and adulthood, prior to age estimation. Faces classified as adults are passed to the adult age-determination function and the others are passed to the child age-determination function. Compared to published results, this method yields the highest accuracy recognition rates, both in overall mean-absolute error (MAE) and mean-absolute error for the two periods of human development: childhood and adulthood.","PeriodicalId":325900,"journal":{"name":"2009 IEEE 3rd International Conference on Biometrics: Theory, Applications, and Systems","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2009-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124515050","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 156
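A minimal sketch of the two-stage pipeline described above, using scikit-learn: a binary SVM routes each sample to a child or adult SVR age regressor. The random feature vectors stand in for AAM parameters, and the 18-year adulthood threshold is an assumption made for the example.

```python
import numpy as np
from sklearn.svm import SVC, SVR

rng = np.random.default_rng(0)
# placeholder "AAM parameter vectors": in the paper these come from fitting an
# Active Appearance Model to each face; here they are random stand-ins
X = rng.normal(size=(400, 20))
ages = rng.uniform(3, 70, size=400)
is_adult = (ages >= 18).astype(int)          # assumed adulthood threshold

# stage 1: binary childhood/adulthood classifier
clf = SVC(kernel="rbf").fit(X, is_adult)

# stage 2: one age regressor per growth stage
reg_child = SVR(kernel="rbf").fit(X[is_adult == 0], ages[is_adult == 0])
reg_adult = SVR(kernel="rbf").fit(X[is_adult == 1], ages[is_adult == 1])

def estimate_age(x):
    """Route a feature vector to the child or adult regressor, as in the
    two-stage scheme described in the abstract."""
    x = x.reshape(1, -1)
    reg = reg_adult if clf.predict(x)[0] == 1 else reg_child
    return float(reg.predict(x)[0])

print(estimate_age(X[0]))
```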
A new approach to unwrap a 3-D fingerprint to a 2-D rolled equivalent fingerprint
S. Shafaei, T. Inanc, L. Hassebrook
For many years, fingerprints have been captured by pressing a finger against a paper or hard surface. This touch-based fingerprint acquisition introduces some problems, such as distortions and deformations in the acquired images, which arise due to the contact of the fingerprint surface with the sensor platen and degrade the recognition performance. A new touch-less fingerprint technology has recently been introduced to the market, which can address the problems of contact-based fingerprint systems. In this paper, we propose a new algorithm for unwrapping the acquired 3-D scan of the subject's finger into a 2-D rolled equivalent image. Therefore, the resulting image can be matched with conventional 2-D scans; it can also be used to match unwrapped 3-D fingerprints among themselves with 2-D fingerprint matching algorithms. The algorithm is based on curvature analysis of the 3-D surface. The quality of the resulting image is evaluated and analyzed using NIST fingerprint image software.
{"title":"A new approach to unwrap a 3-D fingerprint to a 2-D rolled equivalent fingerprint","authors":"S. Shafaei, T. Inanc, L. Hassebrook","doi":"10.1109/BTAS.2009.5339023","DOIUrl":"https://doi.org/10.1109/BTAS.2009.5339023","url":null,"abstract":"For many years, fingerprints have been captured by pressing a finger against a paper or hard surface. This touch-based fingerprint acquisition introduces some problems such as distortions and deformations in the acquired images, which arise due to the contact of the fingerprint surface with the sensor platen, and degrades the recognition performance. A new touch-less fingerprint technology has been recently introduced to the market, which can address the problems with the contact-based fingerprint systems. In this paper, we propose a new algorithm for unwrapping the acquired 3-D scan of the subject's finger into a 2-D rolled equivalent image. Therefore, The resulting image can be matched with the conventional 2-D scans; it also can be used for matching unwrapped 3-D fingerprints among themselves with the 2-D fingerprint matching algorithms. The algorithm is based on curvature analysis of the 3-D surface. The quality of the resulting image is evaluated and analyzed using NIST fingerprint image software.","PeriodicalId":325900,"journal":{"name":"2009 IEEE 3rd International Conference on Biometrics: Theory, Applications, and Systems","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2009-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126061209","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 25
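The following is a deliberately simplified stand-in for the unwrapping step: it flattens a roughly cylindrical finger point cloud by arc length around a crude axis rather than using the curvature analysis the paper describes. All data and names are synthetic placeholders.

```python
import numpy as np

def cylindrical_unwrap(points, grid=(200, 300)):
    """Rasterize a roughly cylindrical finger point cloud (x, y, z, value) onto a
    2-D image: columns follow the unrolled arc length around the finger axis,
    rows follow the height along the axis. A much-simplified cylindrical stand-in
    for the paper's curvature-based unwrapping."""
    x, y, z, val = points.T
    cx, cz = x.mean(), z.mean()                       # crude axis through the centroid
    theta = np.arctan2(z - cz, x - cx)
    r_mean = np.hypot(x - cx, z - cz).mean()
    u = theta * r_mean                                # unrolled arc-length coordinate
    rows = ((y - y.min()) / (np.ptp(y) + 1e-9) * (grid[0] - 1)).astype(int)
    cols = ((u - u.min()) / (np.ptp(u) + 1e-9) * (grid[1] - 1)).astype(int)
    img = np.zeros(grid)
    img[rows, cols] = val                             # nearest-point rasterization
    return img

# toy half-cylinder "finger" carrying a sinusoidal ridge pattern as surface value
rng = np.random.default_rng(6)
s = rng.uniform(size=(20000, 2))
theta, height = np.pi * (s[:, 0] - 0.5), s[:, 1]
pts = np.column_stack([np.cos(theta), height, np.sin(theta),
                       0.5 + 0.5 * np.sin(40 * theta)])
print(cylindrical_unwrap(pts).shape)
```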
Dynamic three-bin real AdaBoost using biased classifiers: An application in face detection
R. Abiantun, M. Savvides
In this paper, we briefly review AdaBoost and expand on the Discrete version by building weak classifiers from a pair of biased classifiers, which enables the weak classifier to abstain from classifying some samples. We show that this approach turns into a 3-bin Real AdaBoost approach in which the bin sizes and positions are set by user-selected bias parameters and change dynamically with every iteration, which distinguishes it from traditional Real AdaBoost. We apply this method to face detection, more specifically the Viola-Jones approach to detecting faces with Haar-like features, and empirically show that our method can help improve generalization by reducing the testing error of the final classifier. We benchmark the results on the MIT+CMU database.
{"title":"Dynamic three-bin real AdaBoost using biased classifiers: An application in face detection","authors":"R. Abiantun, M. Savvides","doi":"10.1109/BTAS.2009.5339038","DOIUrl":"https://doi.org/10.1109/BTAS.2009.5339038","url":null,"abstract":"In this paper, we briefly review AdaBoost and expand on the Discrete version by building weak classifiers from a pair of biased classifiers which enable the weak classifier to abstain from classifying some samples. We show that this approach turns into a 3-bin Real AdaBoost approach where the bin sizes and positions are set by the bias parameters selected by the user and dynamically change with every iteration which make it different from the traditional Real AdaBoost. We apply this method to face detection more specifically the Viola-Jones approach to detecting faces with Haar-like features and empirically show that our method can help improving the generalization ability by reducing the testing error of the final classifier. We benchmark the results on the MIT+CMU database.","PeriodicalId":325900,"journal":{"name":"2009 IEEE 3rd International Conference on Biometrics: Theory, Applications, and Systems","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2009-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129235667","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
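A hedged sketch of one ingredient of the method above: a Real AdaBoost weak learner whose single feature is partitioned into three bins, with per-bin confidences 0.5*ln(W+/W-) and weight updates w <- w*exp(-y*h). The toy 1-D data, the way the two thresholds move each round, and the round count are assumptions; the paper derives its bins from a pair of biased classifiers over Haar-like features.

```python
import numpy as np

EPS = 1e-10

def three_bin_stump(x, y, w, t_low, t_high):
    """Real-AdaBoost weak learner on one feature, partitioned into three bins:
    below t_low, the middle 'abstain-like' region, and above t_high. Each bin's
    confidence is 0.5*ln(W+/W-) over the current sample weights."""
    bins = np.digitize(x, [t_low, t_high])            # bin index 0, 1 or 2
    conf = np.zeros(3)
    for b in range(3):
        wp = w[(bins == b) & (y == +1)].sum()
        wn = w[(bins == b) & (y == -1)].sum()
        conf[b] = 0.5 * np.log((wp + EPS) / (wn + EPS))
    return conf[bins], conf

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-1, 1, 200), rng.normal(1, 1, 200)])  # toy 1-D feature
y = np.concatenate([-np.ones(200), np.ones(200)]).astype(int)
w = np.full(400, 1.0 / 400)

H = np.zeros(400)                                      # strong classifier score
for _ in range(10):
    # stand-in for the paper's pair of biased classifiers: the two thresholds
    # (and hence the bins) move a little every round
    q = 0.5 + 0.2 * (rng.random() - 0.5)
    t_low, t_high = np.quantile(x, [q - 0.1, q + 0.1])
    h, conf = three_bin_stump(x, y, w, t_low, t_high)
    H += h
    w *= np.exp(-y * h)                                # Real AdaBoost reweighting
    w /= w.sum()

print("training accuracy:", float(np.mean(np.sign(H) == y)))
```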
Sparsity inspired selection and recognition of iris images
Jaishanker K. Pillai, Vishal M. Patel, R. Chellappa
Iris images acquired from a partially cooperating subject often suffer from blur, occlusion due to eyelids, and specular reflections. The performance of existing iris recognition systems degrades significantly on these images. Hence it is essential to select good images from the incoming iris video stream before they are input to the recognition algorithm. In this paper, we propose a sparsity-based algorithm for the selection of good iris images and their subsequent recognition. Unlike most existing algorithms for iris image selection, our method can handle segmentation errors and a wider range of acquisition artifacts common in iris image capture. We perform selection and recognition in a single step, which is more efficient than devising separate specialized algorithms for the two. Recognition from partially cooperating users is a significant step towards deploying iris systems in a wide variety of applications.
{"title":"Sparsity inspired selection and recognition of iris images","authors":"Jaishanker K. Pillai, Vishal M. Patel, R. Chellappa","doi":"10.1109/BTAS.2009.5339067","DOIUrl":"https://doi.org/10.1109/BTAS.2009.5339067","url":null,"abstract":"Iris images acquired from a partially cooperating subject often suffer from blur, occlusion due to eyelids, and specular reflections. The performance of existing iris recognition systems degrade significantly on these images. Hence it is essential to select good images from the incoming iris video stream, before they are input to the recognition algorithm. In this paper, we propose a sparsity based algorithm for selection of good iris images and their subsequent recognition. Unlike most existing algorithms for iris image selection, our method can handle segmentation errors and a wider range of acquisition artifacts common in iris image capture. We perform selection and recognition in a single step which is more efficient than devising separate specialized algorithms for the two. Recognition from partially cooperating users is a significant step towards deploying iris systems in a wide variety of applications.","PeriodicalId":325900,"journal":{"name":"2009 IEEE 3rd International Conference on Biometrics: Theory, Applications, and Systems","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2009-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131461147","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 39
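To illustrate how sparsity can drive both selection and recognition, here is a generic sparse-representation sketch (not the authors' exact formulation): a probe is l1-coded over a gallery dictionary, a Sparsity Concentration Index (SCI) decides whether the probe is good enough to keep, and the class with the smallest reconstruction residual is returned. The random gallery, the Lasso penalty, and the SCI threshold are placeholders.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(2)
n_classes, per_class, dim = 5, 8, 60
labels = np.repeat(np.arange(n_classes), per_class)
# placeholder gallery: columns are normalized iris feature vectors; a real
# system would use e.g. Gabor-based iris features instead of random data
D = rng.normal(size=(dim, n_classes * per_class))
D /= np.linalg.norm(D, axis=0)

def sparse_code(y, alpha=0.01):
    """l1-regularized coding of probe y over the gallery dictionary D."""
    return Lasso(alpha=alpha, fit_intercept=False, max_iter=5000).fit(D, y).coef_

def sci(coef):
    """Sparsity Concentration Index: near 1 when coefficients concentrate on one
    class (good probe), near 0 when spread out (poor-quality probe)."""
    class_energy = np.array([np.abs(coef[labels == c]).sum() for c in range(n_classes)])
    total = np.abs(coef).sum() + 1e-12
    return (n_classes * class_energy.max() / total - 1) / (n_classes - 1)

def select_and_recognize(y, sci_threshold=0.3):
    coef = sparse_code(y)
    if sci(coef) < sci_threshold:
        return None                                   # reject low-quality probe
    residuals = [np.linalg.norm(y - D[:, labels == c] @ coef[labels == c])
                 for c in range(n_classes)]
    return int(np.argmin(residuals))                  # identity with smallest residual

probe = D[:, 3] + 0.02 * rng.normal(size=dim)         # noisy copy of a class-0 gallery sample
print(select_and_recognize(probe))                    # expected: 0
```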
PSO versus AdaBoost for feature selection in multimodal biometrics
Ramachandra Raghavendra, B. Dorizzi, A. Rao, G. Hemantha
In this paper, we present an efficient feature-level fusion scheme that we apply to face and palmprint images. The features for each modality are obtained using the Log Gabor transform and concatenated to form a fused feature vector. We then use a Particle Swarm Optimization (PSO) scheme to reduce the dimension of this vector. Final classification is performed on the projection space of the selected features using Kernel Direct Discriminant Analysis (KDDA). Extensive experiments are carried out on a virtual multimodal biometric database of 250 users built from the FRGC face and PolyU palmprint databases. We compare the proposed selection method with the well-known Adaptive Boosting (AdaBoost) method in terms of both the number of features selected and performance. Experimental results in both closed-set identification and verification show that feature fusion improves performance over match-score-level fusion, and that the proposed method outperforms AdaBoost in terms of the reduction of the number of features and ease of implementation.
{"title":"PSO versus AdaBoost for feature selection in multimodal biometrics","authors":"Ramachandra Raghavendra, B. Dorizzi, A. Rao, G. Hemantha","doi":"10.1109/BTAS.2009.5339039","DOIUrl":"https://doi.org/10.1109/BTAS.2009.5339039","url":null,"abstract":"In this paper, we present an efficient feature level fusion scheme that we apply on face and palmprint images. The features for each modality are obtained using Log Gabor transform and concatenated to form a fused feature vector. We then use Particle Swarm Optimization (PSO) scheme to reduce the dimension of this vector. Final classification is performed on the projection space of the selected features using Kernel Direct Discriminant Analysis (KDDA). Extensive experiments are carried out on a virtual multimodal biometric database of 250 users built from the face FRGC and the palmprint PolyU databases. We compare the proposed selection method with the well known Adaptive Boosting (AdaBoost) method in terms of both number of features selected and performance. Experimental results in both closed identification and verification rates show that feature fusion improves performance over match score level fusion and also that the proposed method outperforms AdaBoost in terms of reduction of the number of features and facility of implementation.","PeriodicalId":325900,"journal":{"name":"2009 IEEE 3rd International Conference on Biometrics: Theory, Applications, and Systems","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2009-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124416089","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 31
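A minimal binary-PSO feature-selection sketch in the spirit of the comparison above: particles are bit masks over the feature columns, and fitness is cross-validated accuracy of a simple classifier on the selected columns. The Iris dataset and the k-NN fitness function are stand-ins for the paper's fused Log Gabor features and KDDA classifier; inertia and acceleration constants are assumed values.

```python
import numpy as np
from sklearn.datasets import load_iris            # stand-in for fused face+palmprint features
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
rng = np.random.default_rng(3)
n_particles, n_features, n_iter = 10, X.shape[1], 20

def fitness(mask):
    """Cross-validated accuracy of a simple classifier on the selected columns."""
    if mask.sum() == 0:
        return 0.0
    clf = KNeighborsClassifier(n_neighbors=3)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

# binary PSO state: x holds the particles' bit masks
vel = np.zeros((n_particles, n_features))
x = (rng.random((n_particles, n_features)) > 0.5).astype(int)
pbest = x.copy()
pbest_fit = np.array([fitness(p) for p in x])
gbest = pbest[np.argmax(pbest_fit)].copy()

for _ in range(n_iter):
    r1, r2 = rng.random(vel.shape), rng.random(vel.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = (rng.random(vel.shape) < 1 / (1 + np.exp(-vel))).astype(int)  # sigmoid sampling rule
    fits = np.array([fitness(m) for m in x])
    improved = fits > pbest_fit
    pbest[improved], pbest_fit[improved] = x[improved], fits[improved]
    gbest = pbest[np.argmax(pbest_fit)].copy()

print("selected features:", np.flatnonzero(gbest), "fitness:", pbest_fit.max())
```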
A coarse-to-fine curvature analysis-based rotation invariant 3D face landmarking
Przemyslaw Szeptycki, M. Ardabilian, Liming Chen
Automatic 2.5D face landmarking aims at locating facial feature points on 2.5D face models, such as eye corners, the nose tip, etc., and has many applications ranging from face registration to facial expression recognition. In this paper, we propose a rotation-invariant 2.5D face landmarking solution based on facial curvature analysis combined with a generic 2.5D face model, and we make use of a coarse-to-fine strategy for more accurate facial feature point localization. Experimented on more than 1600 face models randomly selected from the FRGC dataset, our technique achieves, compared to ground truth from manual 3D face landmarking, 100% correct nose tip localization within 8 mm precision and 100% correct localization of the inner eye corners within 12 mm precision.
{"title":"A coarse-to-fine curvature analysis-based rotation invariant 3D face landmarking","authors":"Przemyslaw Szeptycki, M. Ardabilian, Liming Chen","doi":"10.1109/BTAS.2009.5339052","DOIUrl":"https://doi.org/10.1109/BTAS.2009.5339052","url":null,"abstract":"Automatic 2.5D face landmarking aims at locating facial feature points on 2.5D face models, such as eye corners, nose tip, etc. and has many applications ranging from face registration to facial expression recognition. In this paper, we propose a rotation invariant 2.5D face landmarking solution based on facial curvature analysis combined with a generic 2.5D face model and make use of a coarse-to-fine strategy for more accurate facial feature points localization. Experimented on more than 1600 face models randomly selected from the FRGC dataset, our technique displays, compared to a ground truth from a manual 3D face landmarking, a 100% of good nose tip localization in 8 mm precision and 100% of good localization for the eye inner corner in 12 mm precision.","PeriodicalId":325900,"journal":{"name":"2009 IEEE 3rd International Conference on Biometrics: Theory, Applications, and Systems","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2009-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132276331","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 127
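A sketch of the HK curvature classification this kind of landmarking builds on (not the paper's full coarse-to-fine pipeline with a generic face model): mean and Gaussian curvature are estimated from a 2.5D range image by finite differences, and convex elliptical points (K > 0, H < 0) are treated as nose-tip candidates. The synthetic dome-plus-bump surface is a placeholder, and derivatives are taken in grid units.

```python
import numpy as np

def hk_classify(z):
    """Mean (H) and Gaussian (K) curvature of a 2.5D range image z(x, y),
    computed from finite-difference derivatives in grid units."""
    zy, zx = np.gradient(z)
    zxy, zxx = np.gradient(zx)
    zyy, _ = np.gradient(zy)
    g = 1 + zx ** 2 + zy ** 2
    K = (zxx * zyy - zxy ** 2) / g ** 2
    H = ((1 + zx ** 2) * zyy - 2 * zx * zy * zxy + (1 + zy ** 2) * zxx) / (2 * g ** 1.5)
    return H, K

# synthetic "face": a broad dome plus a sharper bump standing in for the nose
u, v = np.meshgrid(np.linspace(-1, 1, 120), np.linspace(-1, 1, 120))
z = 0.4 * np.exp(-(u ** 2 + v ** 2) / 0.8) \
    + 0.15 * np.exp(-(u ** 2 + (v - 0.2) ** 2) / 0.01)

H, K = hk_classify(z)
peaks = (K > 0) & (H < 0)                      # convex elliptical (peak-like) points
# coarse step: strongest peak region; a finer local search would follow in practice
iy, ix = np.unravel_index(np.argmax(np.where(peaks, K, -np.inf)), z.shape)
print("nose-tip candidate at grid position:", iy, ix)
```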
Medical biometrics: The perils of ignoring time dependency
Foteini Agrafioti, F. Bui, D. Hatzinakos
The electrocardiogram (ECG) is a medical signal that has lately drawn interest from the biometrics community and has been shown to have significantly discriminative characteristics in a population. This paper brings to light the particular challenges of electrocardiogram recognition and argues that time dependency is a contentious point. In contrast to traditional biometrics, the ECG allows for continuous authentication and consequently expands the range of applications. However, time-varying biometrics put recognition accuracy on the line due to increased intra-subject variability. This paper suggests a novel framework for bypassing this inadequacy. A template update methodology is proposed and demonstrated to boost recognition performance over 2-hour recordings of 10 subjects.
{"title":"Medical biometrics: The perils of ignoring time dependency","authors":"Foteini Agrafioti, F. Bui, D. Hatzinakos","doi":"10.1109/BTAS.2009.5339042","DOIUrl":"https://doi.org/10.1109/BTAS.2009.5339042","url":null,"abstract":"The electrocardiogram (ECG) is a medical signal that has lately drawn interest from the biometrics community, and has been shown to have significantly discriminative characteristics in a population. This paper brings to light the particular challenges of electrocardiogram recognition to advocate that time dependency is a controversial point. In contrast to traditional biometrics, ECG allows for continuous authentication and consequently expands the range of applications. However, time varying biometrics put on the line the recognition accuracy due to increased intra subject variability. This paper suggests a novel framework for bypassing this inadequacy. A template update methodology is proposed and demonstrated to boost the recognition performance over 2 hour recordings of 10 subjects.","PeriodicalId":325900,"journal":{"name":"2009 IEEE 3rd International Conference on Biometrics: Theory, Applications, and Systems","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2009-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133571910","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 38
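To show what a template update methodology can look like in a continuous-authentication loop, here is a toy sketch (an illustration of the general idea, not the paper's method): confidently matched feature vectors are folded into the enrolled template so it tracks slow intra-subject drift, while a static template's score decays. All thresholds, the drift model, and the feature dimension are assumptions.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

class AdaptiveTemplate:
    """Continuous authentication with template update: feature vectors that match
    with high confidence are folded into the enrolled template so it can track
    slow intra-subject drift. A toy sketch of the idea, not the paper's method."""
    def __init__(self, enrolled, accept=0.90, update=0.95, alpha=0.1):
        self.template = enrolled / np.linalg.norm(enrolled)
        self.accept, self.update, self.alpha = accept, update, alpha

    def authenticate(self, feat):
        score = cosine(self.template, feat)
        if score >= self.update:                       # confident match: adapt template
            t = (1 - self.alpha) * self.template + self.alpha * feat / np.linalg.norm(feat)
            self.template = t / np.linalg.norm(t)
        return score >= self.accept

rng = np.random.default_rng(5)
base = rng.normal(size=64)                             # enrolled ECG feature vector (placeholder)
drift_dir = rng.normal(size=64)
auth = AdaptiveTemplate(base)
for minute in range(120):                              # two hours of slow, consistent drift
    feat = base + 0.01 * minute * drift_dir
    accepted = auth.authenticate(feat)

static_score = cosine(base, feat)                      # what a never-updated template would score
print("adaptive template still accepts:", accepted,
      "| static-template score:", round(static_score, 2))
```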
Generating provably secure cancelable fingerprint templates based on correlation-invariant random filtering 基于相关不变随机滤波生成可证明安全的可取消指纹模板
Kenta Takahashi, Shinji Hirata (Hitachi)
Biometric authentication has attracted attention because of its high security and convenience. However, biometric features such as fingerprints cannot be revoked like passwords. Thus, once a user's biometric data stored in the system has been compromised, it cannot be used for secure authentication for the rest of his or her life. To address this issue, an authentication scheme called cancelable biometrics has been studied. However, there remains a major challenge in achieving both strong security and practical accuracy. In this paper, we propose new methods for generating cancelable fingerprint templates with provable security, based on the well-known chip matching algorithm for fingerprint verification and correlation-invariant random filtering for transforming templates. Experimental evaluation shows that our methods can be applied to fingerprint authentication without much loss in accuracy compared with the conventional chip matching algorithm.
{"title":"Generating provably secure cancelable fingerprint templates based on correlation-invariant random filtering","authors":"Kenta Takahashi, Shinji Hirata Hitachi","doi":"10.1109/BTAS.2009.5339047","DOIUrl":"https://doi.org/10.1109/BTAS.2009.5339047","url":null,"abstract":"Biometric authentication has attracted attention because of its high security and convenience. However, biometric feature such as fingerprint can not be revoked like passwords. Thus once the biometric data of a user stored in the system has been compromised, it can not be used for authentication securely for his/her whole life long. To address this issue, an authentication scheme called cancelable biometrics has been studied. However, there remains a major challenge to achieve both strong security and practical accuracy. In this paper, we propose new methods for generating cancelable fingerprint templates with provable security based on the well-known chip matching algorithm for fingerprint verification and correlation-invariant random filtering for transforming templates. Experimental evaluation shows that our methods can be applied to fingerprint authentication without much loss in accuracy compared with the conventional chip matching algorithm.","PeriodicalId":325900,"journal":{"name":"2009 IEEE 3rd International Conference on Biometrics: Theory, Applications, and Systems","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2009-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134629265","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 43
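A minimal sketch of why correlation-invariant random filtering permits matching on protected templates: if each template's 2-D DFT is multiplied by the same user-specific phase-only filter R (so |R| = 1), the circular cross-correlation computed from the protected templates equals the one computed from the plain templates. The key derivation, the chip images, and the peak score below are placeholders, not the paper's full scheme.

```python
import numpy as np

rng = np.random.default_rng(4)

def random_phase_filter(shape, key):
    """User-specific phase-only filter derived from a key (a stand-in for the
    paper's parameter generation); |R| == 1 everywhere, so cross-correlation
    between templates transformed with the same R is preserved."""
    k = np.random.default_rng(key)
    return np.exp(2j * np.pi * k.random(shape))

def protect(template, R):
    """Cancelable template: DFT of the chip image multiplied by the filter."""
    return np.fft.fft2(template) * R

def correlation_score(P1, P2):
    """Peak of the circular cross-correlation computed from protected templates."""
    return np.real(np.fft.ifft2(P1 * np.conj(P2))).max()

x = rng.normal(size=(32, 32))                                 # enrolled chip image (placeholder)
y = np.roll(x, (2, 3), axis=(0, 1)) + 0.1 * rng.normal(size=(32, 32))  # shifted, noisy probe

R = random_phase_filter((32, 32), key=12345)
plain = np.real(np.fft.ifft2(np.fft.fft2(x) * np.conj(np.fft.fft2(y)))).max()
protected = correlation_score(protect(x, R), protect(y, R))
print(round(plain, 6), round(protected, 6))                   # the two peaks coincide
```

Changing the key yields a new filter R and hence a new protected template, which is what makes the template cancelable: a compromised template can be revoked and re-issued without changing the underlying fingerprint.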