Latest publications from the 2015 International Conference on Biometrics (ICB)

Swipe gesture based Continuous Authentication for mobile devices
Pub Date : 2015-05-19 DOI: 10.1109/ICB.2015.7139110
Soumik Mondal, Patrick A. H. Bours
In this research, we investigated the performance of a continuous biometric authentication system for mobile devices under various analysis techniques. We tested these on a publicly available swipe-gesture database with 71 users, but the techniques can also be applied to other biometric modalities in a continuous setting. The best results obtained in this research are that (1) none of the 71 genuine users is locked out of the system; (2) for 68 users we require on average 4 swipe gestures to detect an impostor; (3) for the remaining 3 genuine users, on average 14 swipes are required, while 4 impostors are not detected.
Citations: 39
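Continuous-authentication systems of this kind typically maintain a trust score that is rewarded on genuine-looking actions and penalized on impostor-looking ones, locking the user out when it reaches a floor. The abstract does not give the paper's scoring function, so the sketch below is a generic penalty-and-reward model with illustrative constants:

```python
# Hypothetical penalty-and-reward trust model for continuous authentication.
# All constants are illustrative, not the paper's.

LOCKOUT = 0      # trust level at which the user is locked out
START = 100      # initial trust
REWARD = 1       # trust gained on a genuine-looking swipe
PENALTY = 25     # trust lost on an impostor-looking swipe
CEILING = 100    # trust never exceeds the ceiling

def swipes_until_lockout(scores, threshold=0.5):
    """Count swipes processed before trust drops to LOCKOUT.

    `scores` are per-swipe classifier scores in [0, 1]; a score below
    `threshold` is treated as impostor-like and penalized.
    """
    trust = START
    for n, s in enumerate(scores, start=1):
        if s >= threshold:
            trust = min(CEILING, trust + REWARD)
        else:
            trust -= PENALTY
        if trust <= LOCKOUT:
            return n       # locked out after n swipes
    return None            # never locked out

# An impostor producing consistently low scores is caught in 4 swipes:
print(swipes_until_lockout([0.1, 0.2, 0.15, 0.1]))  # -> 4
```

With a penalty of 25 and a starting trust of 100, a consistently impostor-like stream is rejected after 4 swipes, matching the order of magnitude reported above.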
Fast and robust self-training beard/moustache detection and segmentation
Pub Date : 2015-05-19 DOI: 10.1109/ICB.2015.7139066
T. Le, Khoa Luu, M. Savvides
Facial hair detection and segmentation play an important role in forensic facial analysis. In this paper, we propose a fast, robust, fully automatic and self-training system for beard/moustache detection and segmentation in challenging facial images. To overcome the limitations of illumination, facial hair color and near-clean shaving, our facial hair detector self-learns a transformation vector that separates a hair class and a non-hair class from the test image itself. A feature vector consisting of a Histogram of Gabor (HoG) and a Histogram of Oriented Gradient of Gabor (HOGG) at different directions and frequencies is proposed for both beard/moustache detection and segmentation. A feature-based segmentation is then applied to segment the beard/moustache from a region of the face found to contain facial hair. Experimental results demonstrate the robustness and effectiveness of our proposed system in detecting and segmenting facial hair in images drawn from three entire databases, namely the Multiple Biometric Grand Challenge (MBGC) still face database, the NIST color FERET Facial Recognition Technology database and a large subset of the Pinellas County database.
Citations: 6
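The Histogram of Gabor (HoG) feature described above can be illustrated with a small filter bank: convolve the image with complex Gabor kernels at several frequencies and orientations, then histogram the response magnitudes. All kernel parameters below are illustrative choices, and the paper's companion HOGG feature (histograms of oriented gradients of the responses) is omitted:

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(freq, theta, sigma=3.0, size=15):
    """Complex 2D Gabor kernel at spatial frequency `freq` and orientation `theta`."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    x_rot = x * np.cos(theta) + y * np.sin(theta)        # rotated coordinate
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))   # Gaussian envelope
    return envelope * np.exp(2j * np.pi * freq * x_rot)  # complex carrier

def hog_feature(img, freqs=(0.1, 0.2), n_orient=4, bins=8):
    """Concatenated, normalized histograms of Gabor response magnitudes."""
    hists = []
    for f in freqs:
        for k in range(n_orient):
            kern = gabor_kernel(f, np.pi * k / n_orient)
            mag = np.abs(fftconvolve(img, kern, mode="same"))
            h, _ = np.histogram(mag, bins=bins, range=(0.0, mag.max() + 1e-9))
            hists.append(h / h.sum())                    # each histogram sums to 1
    return np.concatenate(hists)

feat = hog_feature(np.random.default_rng(0).random((32, 32)))
print(feat.shape)  # 2 frequencies x 4 orientations x 8 bins -> (64,)
```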
Combining view-based pose normalization and feature transform for cross-pose face recognition
Pub Date : 2015-05-19 DOI: 10.1109/ICB.2015.7139114
Hua Gao, H. K. Ekenel, R. Stiefelhagen
Automatic face recognition across large pose changes is still a challenging problem. Previous solutions apply a transform in image space or feature space to normalize the pose mismatch. In feature transform, the feature vector extracted from a probe facial image is transferred to match the gallery condition with regression models. Usually, the regression models are learned from paired gallery-probe conditions in which pose angles are known or accurately estimated. Solutions based on image transform can handle continuous pose changes, yet they suffer from warping artifacts due to misalignment and self-occlusion. In this work, we propose a novel approach that combines the advantages of both methods. The algorithm is able to handle continuous pose mismatch between gallery and probe sets, mitigating the impact of inaccurate pose estimation in feature-transform-based methods. We evaluate the proposed algorithm on the FERET face database, where the pose angles are roughly annotated. Experimental results show that our proposed method is superior to pure image/feature transform methods, especially when the pose angle difference is large.
Citations: 7
Attribute preserved face de-identification
Pub Date : 2015-05-19 DOI: 10.1109/ICB.2015.7139096
Amin Jourabloo, Xi Yin, Xiaoming Liu
In this paper, we recognize the need to de-identify a face image while preserving a large set of facial attributes, which has not been explicitly studied before. We verify the underlying assumption that different visual features are used for identification and attribute classification. As a result, the proposed approach jointly models face de-identification and attribute preservation in a unified optimization framework. Specifically, a face image is represented by the shape and appearance parameters of an AAM. Motivated by k-Same, we select the k images that share the most similar attributes with those of a test image. Instead of using the average of the k images, as adopted by k-Same methods, we formulate an objective function and use gradient descent to learn the optimal weights for fusing the k images. Experimental results show that our proposed approach performs substantially better than the baseline method, achieving a lower face recognition rate while preserving more facial attributes.
Citations: 85
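The weighted-fusion step can be sketched as follows. Instead of the plain average used by k-Same, weights on the k candidate images are learned by gradient descent; the surrogate objective here only matches a target attribute vector, whereas the paper's objective also suppresses the original identity, so this illustrates the optimization pattern rather than the paper's loss:

```python
import numpy as np

def learn_fusion_weights(attrs, target, steps=500, lr=0.5):
    """attrs: (k, d) attribute vectors of the k candidate images.
    Returns simplex weights w minimizing ||attrs.T @ w - target||^2."""
    k = attrs.shape[0]
    z = np.zeros(k)                          # unconstrained parameters
    for _ in range(steps):
        w = np.exp(z) / np.exp(z).sum()      # softmax keeps w on the simplex
        r = attrs.T @ w - target             # attribute residual
        grad_w = attrs @ r                   # gradient of 0.5 * ||r||^2 w.r.t. w
        z -= lr * w * (grad_w - w @ grad_w)  # chain rule through the softmax
    return np.exp(z) / np.exp(z).sum()

# 3 candidate images described by 2 binary attributes; the target has both.
attrs = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
w = learn_fusion_weights(attrs, target=np.array([1.0, 1.0]))
print(w.argmax())  # -> 2: the image matching both attributes dominates the fusion
```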
A biomechanical approach to iris normalization
Pub Date : 2015-05-19 DOI: 10.1109/ICB.2015.7139041
Inmaculada Tomeo-Reyes, A. Ross, A. Clark, V. Chandran
The richness of the iris texture and its variability across individuals make it a useful biometric trait for personal authentication. One of the key stages in classical iris recognition is the normalization process, where the annular iris region is mapped to a dimensionless pseudo-polar coordinate system. This process results in a rectangular structure that can be used to compensate for differences in scale and variations in pupil size. Most iris recognition methods in the literature adopt linear sampling in the radial and angular directions when performing iris normalization. In this paper, a biomechanical model of the iris is used to define a novel nonlinear normalization scheme that improves iris recognition accuracy under different degrees of pupil dilation. The proposed biomechanical model is used to predict the radial displacement of any point in the iris at a given dilation level, and this information is incorporated in the normalization process. Experimental results on the WVU pupil light reflex database (WVU-PLR) indicate the efficacy of the proposed technique, especially when matching iris images with large differences in pupil size.
Citations: 22
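The linear sampling the abstract contrasts with is the classical rubber-sheet normalization: the annulus between the pupil and iris boundaries is sampled linearly in radius and angle into a fixed-size rectangle. A minimal sketch, assuming concentric circular boundaries and nearest-neighbour lookup:

```python
import numpy as np

def normalize_iris(img, cx, cy, r_pupil, r_iris, n_radial=16, n_angular=64):
    """Map the annular iris region to an (n_radial, n_angular) rectangle."""
    out = np.zeros((n_radial, n_angular), dtype=img.dtype)
    thetas = np.linspace(0, 2 * np.pi, n_angular, endpoint=False)
    for j, t in enumerate(thetas):
        for i in range(n_radial):
            frac = i / (n_radial - 1)             # linear radial sampling
            r = r_pupil + frac * (r_iris - r_pupil)
            x = int(round(cx + r * np.cos(t)))    # nearest-neighbour lookup
            y = int(round(cy + r * np.sin(t)))
            out[i, j] = img[y, x]
    return out

img = np.arange(100 * 100).reshape(100, 100).astype(float)
strip = normalize_iris(img, cx=50, cy=50, r_pupil=10, r_iris=30)
print(strip.shape)  # -> (16, 64)
```

The biomechanical scheme described above replaces the linear `frac` interpolation with model-predicted radial displacements at the given dilation level.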
Latent fingerprint match using Minutia Spherical Coordinate Code
Pub Date : 2015-05-19 DOI: 10.1109/ICB.2015.7139061
Fengde Zheng, Chunyu Yang
This paper proposes a fingerprint matching algorithm using the Minutia Spherical Coordinate Code (MSCC), a modified version of the Minutia Cylinder Code (MCC). The advantage of this algorithm is its compact feature representation: the binary vector of each minutia needs only 288 bits, while MCC needs 448 or 1792 bits. The algorithm also uses a greedy alignment approach which can rediscover minutiae pairs lost in the original stage. Experiments on AFIS data and NIST Special Database 27 (SD27) demonstrate the effectiveness of the proposed approach. We compare this algorithm to MCC; the experiments show that MSCC has better matching accuracy. On NIST SD27, the average compressed feature size is 2.3 Kbytes, versus 4.84 Kbytes for MCC.
Citations: 9
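Fixed-length binary minutia codes such as MCC and MSCC are usually compared with a Hamming-style similarity. The 288-bit length comes from the abstract; the similarity formula below is a generic normalized-Hamming choice, not necessarily the paper's:

```python
import random

BITS = 288                            # code length from the abstract
MASK = (1 << BITS) - 1

def similarity(a, b):
    """1.0 for identical codes, 0.0 for codes differing in every bit."""
    return 1.0 - bin((a ^ b) & MASK).count("1") / BITS

random.seed(0)
code = random.getrandbits(BITS)       # a hypothetical 288-bit minutia code
noisy = code ^ (1 << 5) ^ (1 << 200)  # the same code with two bits flipped
print(similarity(code, code))   # -> 1.0
print(similarity(code, noisy))  # 286 of 288 bits still agree
```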
Discriminative regularized metric learning for person re-identification
Pub Date : 2015-05-19 DOI: 10.1109/ICB.2015.7139075
Venice Erin Liong, Yongxin Ge, Jiwen Lu
Person re-identification aims to match people across non-overlapping cameras, and recent advances have shown that metric learning is an effective technique for person re-identification. However, most existing metric learning methods suffer from the small sample size (SSS) problem due to the limited amount of labeled training samples. In this paper, we propose a new discriminative regularized metric learning (DRML) method for person re-identification. Specifically, we exploit discriminative information of training samples to regulate the eigenvalues of the intra-class and inter-class covariance matrices so that the distance metric estimated is less biased. Experimental results on three widely used datasets validate the effectiveness of our proposed method for person re-identification.
Citations: 6
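The eigenvalue-regularization idea can be sketched directly: with few samples the covariance estimate is rank-deficient, so its eigenvalues are shrunk toward their mean before inversion, yielding a well-conditioned Mahalanobis metric. The shrinkage rule below is a common generic choice, not DRML's exact formulation:

```python
import numpy as np

def regularized_metric(X, alpha=0.1):
    """Inverse of an eigenvalue-shrunk covariance, usable as a Mahalanobis metric."""
    S = np.cov(X, rowvar=False)
    vals, vecs = np.linalg.eigh(S)
    vals = (1 - alpha) * vals + alpha * vals.mean()  # lift small/zero eigenvalues
    return vecs @ np.diag(1.0 / vals) @ vecs.T

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 10))    # 5 samples in 10-D: covariance is rank-deficient
M = regularized_metric(X)

def mahalanobis_sq(u, v):
    d = u - v
    return float(d @ M @ d)

print(mahalanobis_sq(X[0], X[1]) > 0.0)  # -> True: the metric is positive definite
```

Without the shrinkage step, inverting `S` here would fail outright, which is the small-sample-size problem the abstract refers to.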
Exploring dorsal finger vein pattern for robust person recognition
Pub Date : 2015-05-19 DOI: 10.1109/ICB.2015.7139059
Ramachandra Raghavendra, C. Busch
Finger vein based biometric recognition has generated increasing interest among biometric researchers because of its accuracy, robustness and anti-spoofing properties. Prior efforts documented in the finger vein biometrics literature have only investigated the ventral vein pattern, which is formed on the lower part of the finger underneath the skin surface. This paper investigates a new finger vein biometric approach by exploring the vein pattern present in the dorsal finger region. The dorsal finger vein pattern can thus be used as an independent biometric characteristic for recognizing a target subject. We present a fully automated approach with the key steps of image capture, Region of Interest (ROI) extraction, pre-processing to enhance the vein pattern, feature extraction and comparison. This paper also introduces a new database of dorsal finger vein patterns from 125 subjects, yielding 500 unique fingers with 10 samples each, for a total of 5000 dorsal finger vein samples. Extensive experiments carried out on our new dorsal finger vein database achieve promising accuracy and thereby provide new insights into this new biometric approach.
Citations: 23
Band selection for Gabor feature based hyperspectral palmprint recognition
Pub Date : 2015-05-19 DOI: 10.1109/ICB.2015.7139104
L. Shen, Ziyi Dai, Sen Jia, Meng Yang, Zhihui Lai, Shiqi Yu
Hyperspectral imaging has recently been introduced into face and palmprint recognition and is now drawing much attention from researchers in this area. Compared to simple 2D imaging technology, a hyperspectral image carries much more information. Due to their ability to jointly explore the spatial-spectral domain, 3D Gabor wavelets have been successfully applied to hyperspectral palmprint recognition. In this approach, a set of 52 three-dimensional Gabor wavelets with different frequencies and orientations were designed and convolved with the cube to extract discriminative information in the joint spatial-spectral domain. However, there is also much redundancy among the hyperspectral data, which makes feature extraction computationally expensive. In this paper, we propose an AP (affinity propagation) based clustering approach to select representative band images from the available data. As the number of bands is greatly reduced, the feature extraction process can be efficiently sped up. Experimental results on the publicly available HK-PolyU hyperspectral palmprint database show that the proposed approach not only improves efficiency, but also reduces the EER of the 3D Gabor feature based method from 4% to 3.26%.
Citations: 10
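Affinity propagation picks exemplars without fixing the number of clusters in advance, which is what makes it attractive for band selection. A sketch on synthetic data, using band-to-band correlation as the precomputed similarity (the paper's actual similarity measure is not specified in the abstract):

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

rng = np.random.default_rng(0)
profiles = rng.normal(size=(3, 500))     # 3 underlying spectral profiles
# 30 synthetic band images (flattened), 10 noisy copies of each profile:
bands = np.repeat(profiles, 10, axis=0) + 0.05 * rng.normal(size=(30, 500))

sim = np.corrcoef(bands)                 # band-to-band similarity matrix
ap = AffinityPropagation(affinity="precomputed", random_state=0).fit(sim)
exemplars = ap.cluster_centers_indices_  # indices of representative bands
print(len(exemplars) < len(bands))       # far fewer representative bands than 30
```

Only the exemplar bands would then be passed to the Gabor feature extraction, which is where the reported speed-up comes from.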
Palm region extraction for contactless palmprint recognition
Pub Date : 2015-05-19 DOI: 10.1109/ICB.2015.7139058
Koichi Ito, Takuto Sato, Shoichiro Aoyama, S. Sakai, Shusaku Yusa, T. Aoki
Palm region extraction is one of the most important processes in palmprint recognition, since the accuracy of extracted palm regions has a significant impact on recognition performance. Especially in contactless recognition systems, a palm region has to be extracted from a palm image while taking a variety of hand poses into consideration. Most conventional methods of palm region extraction assume that all the fingers are spread and the palm faces the camera. This assumption forces users to place their hand in a limited pose and position, impairing the flexibility of the contactless palmprint recognition system. Addressing this problem, this paper proposes a novel palm region extraction method that is robust to hand pose. Through a set of experiments using our database, which contains palm images with different hand poses, and a public database, we demonstrate that the proposed method performs efficiently compared with conventional methods.
Citations: 23