
Latest publications from the 2015 International Conference on Biometrics (ICB)

Unconstrained face detection: State of the art baseline and challenges
Pub Date : 2015-05-19 DOI: 10.1109/ICB.2015.7139089
J. Cheney, Benjamin Klein, Anil K. Jain, Brendan Klare
A large-scale study of the accuracy and efficiency of face detection algorithms on unconstrained face imagery is presented. Nine different face detection algorithms are studied, acquired through government rights, open source, or commercial licensing. The primary data set used for analysis is the IARPA Janus Benchmark A (IJB-A), a recently released unconstrained face detection and recognition dataset which, at the time of this study, contained 67,183 manually localized faces in 5,712 images and 20,408 video frames. The goal of the study is to determine the state of the art in face detection on unconstrained imagery, motivated by the saturation of recognition accuracies on seminal unconstrained face recognition datasets, which are filtered to contain only faces detectable by a commodity face detection algorithm. The most notable finding from this study is that top-performing detectors still fail to detect the vast majority of faces with extreme pose, partial occlusion, and/or poor illumination. In total, over 20% of faces fail to be detected by all nine detectors studied. The speed of the detectors was generally correlated with accuracy: faster detectors were less accurate than their slower counterparts. Finally, key considerations and guidance are provided for performing face detection evaluations. All software used to conduct the evaluations and plot the accuracies is made available as open source.
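The headline figure (over 20% of faces missed by every detector) amounts to intersecting the per-detector miss sets. A minimal sketch of that computation, with hypothetical detector outputs:

```python
# Sketch: fraction of ground-truth faces missed by *all* detectors
# (the paper's "over 20%" figure). Detector outputs here are hypothetical.

def fraction_missed_by_all(ground_truth_ids, detections_per_detector):
    """ground_truth_ids: set of annotated face ids; detections_per_detector:
    list of sets, one per detector, of the face ids each detector found."""
    missed_by_all = set(ground_truth_ids)
    for detected in detections_per_detector:
        missed_by_all -= detected  # keep only faces no detector has found yet
    return len(missed_by_all) / len(ground_truth_ids)

faces = {1, 2, 3, 4, 5}
detectors = [{1, 2, 3}, {1, 2, 4}, {2, 3}]
print(fraction_missed_by_all(faces, detectors))  # face 5 is missed by every detector -> 0.2
```

Because the set difference is applied detector by detector, the result is insensitive to the order in which detectors are evaluated.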
Citations: 34
Cross-sensor iris verification applying robust fused segmentation algorithms
Pub Date : 2015-05-19 DOI: 10.1109/ICB.2015.7139042
E. G. Llano, J. Colores-Vargas, M. García-Vázquez, L. M. Zamudio-Fuentes, A. A. Ramírez-Acosta
Currently, identity management systems work with heterogeneous iris images captured by different types of iris sensors. Indeed, iris recognition is being widely used in different environments where the identity of a person is necessary. Therefore, it is a challenging problem to maintain a stable iris recognition system that is effective for all types of iris sensors. This paper proposes a new cross-sensor iris recognition scheme that increases recognition accuracy. The novelty of this work is the new strategy of applying robust fusion methods at the segmentation stage for cross-sensor iris recognition. Experiments with the Casia-V3-Interval, Casia-V4-Thousand, Ubiris-V1 and MBGC-V2 databases show that our scheme increases recognition accuracy and is robust to different types of iris sensors while reducing user interaction.
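Fusing segmentation algorithms can be illustrated with a per-pixel majority vote over binary iris masks; this is a generic sketch, not necessarily the authors' exact fusion rule:

```python
def majority_vote_mask(masks):
    """Fuse binary segmentation masks (nested lists of 0/1) by a strict
    per-pixel majority vote over the candidate segmentation algorithms."""
    n = len(masks)
    rows, cols = len(masks[0]), len(masks[0][0])
    fused = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            votes = sum(m[r][c] for m in masks)
            fused[r][c] = 1 if votes * 2 > n else 0  # strict majority wins
    return fused

m1 = [[1, 1], [0, 1]]
m2 = [[1, 0], [0, 1]]
m3 = [[0, 1], [1, 1]]
print(majority_vote_mask([m1, m2, m3]))  # [[1, 1], [0, 1]]
```

A vote like this tends to suppress the idiosyncratic failures of any single segmenter, which is one plausible source of the cross-sensor robustness the paper reports.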
Citations: 14
On fusion for multispectral iris recognition
Pub Date : 2015-05-19 DOI: 10.1109/ICB.2015.7139072
Peter Wild, P. Radu, J. Ferryman
Multispectral iris recognition uses information from multiple bands of the electromagnetic spectrum to better represent certain physiological characteristics of the iris texture and enhance recognition accuracy. This paper addresses the question of single versus cross-spectral performance and compares score-level fusion accuracy for different feature types, combining different wavelengths to overcome limitations in less constrained recording environments. Further, it is investigated whether Doddington's “goats” (users who are particularly difficult to recognize) in one spectrum also extend to other spectra. Focusing on the question of feature stability at different wavelengths, this work uses manual ground-truth segmentation, avoiding bias from segmentation impact. Experiments on the public UTIRIS multispectral iris dataset using four feature extraction techniques reveal a significant enhancement when combining NIR + Red for 2-channel and NIR + Red + Blue for 3-channel fusion, across different feature types. Selective feature-level fusion is investigated and shown to improve overall and especially cross-spectral performance without increasing the overall length of the iris code.
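Score-level fusion of wavelength channels is commonly done by normalizing each channel's scores and applying the sum rule; a sketch under that assumption (the paper may use a different normalization):

```python
def min_max_normalize(scores):
    """Map a list of raw comparison scores onto [0, 1]."""
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo) for s in scores]

def sum_rule_fusion(channel_scores):
    """channel_scores: dict channel name -> list of comparison scores, all in
    the same comparison order. Normalize each channel independently, then
    average per comparison (the sum rule for score-level fusion)."""
    normalized = [min_max_normalize(v) for v in channel_scores.values()]
    n = len(normalized)
    return [sum(col) / n for col in zip(*normalized)]

# Hypothetical scores for three comparisons in two channels (NIR and Red)
scores = {"NIR": [0.2, 0.9, 0.5], "Red": [10.0, 40.0, 25.0]}
print(sum_rule_fusion(scores))
```

Normalizing before summing matters here because the two channels produce scores on very different scales; without it the Red channel would dominate the fused score.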
Citations: 12
A verify-correct approach to person re-identification based on Partial Least Squares signatures
Pub Date : 2015-05-19 DOI: 10.1109/ICB.2015.7139088
G. Prado, H. Pedrini, W. R. Schwartz
In the surveillance field, it is very common to have camera networks covering large crowded areas. Often, cameras in these networks do not share the same field of view, and they are not always calibrated. In these cases, common tasks such as tracking cannot be applied directly, as the information from one camera must also be consistent with the others. This is the most common scenario for the person re-identification problem, where there is a need to detect, track and maintain a consistent identification of people across a network of cameras. Many approaches have been developed to solve this problem in different ways. However, person re-identification remains an open problem due to the many challenges that must be addressed to build a robust system. To tackle the re-identification problem and improve accuracy, we propose a novel approach based on Partial Least Squares signatures built from the visual appearance of people. We demonstrate the method's performance with experiments conducted on three publicly available data sets. Results show that our method outperforms the chosen baseline on all data sets.
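Matching appearance signatures across cameras reduces to ranking gallery identities by similarity to a probe signature. A hedged sketch using cosine similarity as a stand-in for the paper's full PLS pipeline, with hypothetical signatures:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length signature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def rank_gallery(probe_sig, gallery):
    """gallery: dict identity -> signature. Return identities sorted from
    most to least similar to the probe signature."""
    return sorted(gallery, key=lambda pid: cosine(probe_sig, gallery[pid]),
                  reverse=True)

gallery = {"A": [1.0, 0.0, 0.0], "B": [0.0, 1.0, 0.0], "C": [0.7, 0.7, 0.0]}
probe = [0.9, 0.1, 0.0]
print(rank_gallery(probe, gallery))  # 'A' ranks first
```

In a verify-correct scheme, the top-ranked identity would then be accepted or rejected by a verification step rather than trusted outright.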
Citations: 1
Illumination-normalized face recognition using Chromaticity Intrinsic Image
Pub Date : 2015-05-19 DOI: 10.1109/ICB.2015.7139050
Wuming Zhang, Xi Zhao, Di Huang, J. Morvan, Yunhong Wang, Liming Chen
Face recognition (FR) across illumination variations endeavors to alleviate the effect of illumination changes on the human face, which remains a great challenge in reliable FR. Most prior studies focus on normalization of holistic lighting intensity while neglecting or simplifying the mechanism of image color formation. In contrast, we propose in this paper a novel approach for lighting-robust FR by building an underlying reflectance model that characterizes the appearance of the face surface. Specifically, the proposed illumination processing pipeline sheds light on the interactions among the face surface, lighting and camera, and enables generation of a Chromaticity Intrinsic Image (CII) in a log space that is robust to illumination variations. Experimental results on the CMU-PIE and ESRC face databases show the effectiveness of the proposed approach in dealing with lighting variations in FR.
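The illumination robustness of a log chromaticity space can be sketched with simple band ratios; this assumes a basic (log R/G, log B/G) definition, which may differ in detail from the paper's CII formulation:

```python
import math

def log_chromaticity(pixel, eps=1e-6):
    """Map an (R, G, B) pixel to log band-ratio chromaticity coordinates
    (log R/G, log B/G). Ratios cancel any global illumination scale factor."""
    r, g, b = (max(v, eps) for v in pixel)  # guard against zero channels
    return (math.log(r / g), math.log(b / g))

# Scaling all channels by the same factor k (a uniform brightness change)
# leaves the coordinates unchanged - the source of illumination robustness.
p = (120.0, 80.0, 60.0)
k = 0.5
print(log_chromaticity(p))
print(log_chromaticity(tuple(k * v for v in p)))  # identical coordinates
```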
Citations: 0
Guidelines for best practices in biometrics research
Pub Date : 2015-05-19 DOI: 10.1109/ICB.2015.7139116
Anil K. Jain, Brendan Klare, A. Ross
Biometric recognition has undoubtedly made great strides over the past 50 years. To ensure that current academic research in biometrics has a positive impact on future technological developments, this paper documents guidelines encouraging researchers to focus on high-impact problems, develop solutions that are practically viable, report results using sound experimental and evaluation protocols, and justify claims based on verifiable facts. The intent is to ensure that methods and results published in the literature have been properly evaluated and are practically feasible for automated or semi-automated human recognition. It is believed that following these guidelines will avoid inflated claims and place published research on a legitimate foundation that can be embraced by practitioners and peers in biometrics and related scientific disciplines (e.g., forensic science).
Citations: 43
Superpixel based finger vein ROI extraction with sensor interoperability
Pub Date : 2015-05-19 DOI: 10.1109/ICB.2015.7139108
Lu Yang, Gongping Yang, Lizhen Zhou, Yilong Yin
Finger vein is a new and promising trait in biometric recognition, and some related progress has been achieved in recent years. Considering that there are many different sensors in a biometric system, sensor interoperability is a very important issue, yet it is still neglected in state-of-the-art finger vein recognition. Based on an analysis of the shortcomings of current finger vein ROI extraction methods, this paper proposes a new superpixel-based finger vein ROI extraction method with sensor interoperability. First, finger boundaries are determined by tracking superpixels, which are very robust to image variations such as gray-level changes and background noise. Furthermore, to handle finger displacement, the middle points of the detected finger boundaries are used to adjust the finger direction. Finally, the finger ROI is localized by the internal tangents of the finger boundaries. Experimental results show that the proposed method can extract ROIs accurately and adaptively from images captured by different sensors.
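Using the boundary midpoints to adjust finger direction can be sketched as fitting a line through the midline and rotating by its angle; the least-squares fit below is an illustrative assumption, not the paper's exact procedure:

```python
import math

def finger_rotation_angle(upper_ys, lower_ys):
    """Given one y-coordinate per image column for the upper and lower finger
    boundaries, fit a least-squares line through their midpoints and return
    its angle in degrees. Rotating the image by -angle levels the finger."""
    mids = [(u + l) / 2 for u, l in zip(upper_ys, lower_ys)]
    n = len(mids)
    xs = range(n)
    mean_x = (n - 1) / 2
    mean_y = sum(mids) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, mids))
             / sum((x - mean_x) ** 2 for x in xs))
    return math.degrees(math.atan(slope))

upper = [10, 11, 12, 13, 14]
lower = [30, 31, 32, 33, 34]
print(finger_rotation_angle(upper, lower))  # midline rises 1 px per column -> 45 degrees
```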
Citations: 26
Discriminative transfer learning for single-sample face recognition
Pub Date : 2015-05-19 DOI: 10.1109/ICB.2015.7139095
Junlin Hu, Jiwen Lu, Xiuzhuang Zhou, Yap-Peng Tan
Discriminant analysis is an important technique for face recognition because it can extract discriminative features to classify different persons. However, most existing discriminant analysis methods fail to work for single-sample face recognition (SSFR) because there is only a single training sample per person, so the within-class variation of that person cannot be estimated in such a scenario. In this paper, we present a new discriminative transfer learning (DTL) approach for SSFR, where discriminant analysis is performed on a multiple-sample generic training set and then transferred to the single-sample gallery set. Specifically, our DTL learns a feature projection that minimizes the intra-class variation and maximizes the inter-class variation of samples in the training set, while simultaneously minimizing the difference between the generic training set and the gallery set. Experimental results on three face datasets, the FERET, CAS-PEAL-R1, and LFW datasets, are presented to show the efficacy of our method.
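The three simultaneous criteria can be summarized in a single objective over a projection W (a hedged sketch of the general form only; the paper's exact scatter definitions and weighting may differ):

```latex
\min_{W}\;
\operatorname{tr}\!\left(W^{\top} S_w W\right)
\;-\;\alpha\,\operatorname{tr}\!\left(W^{\top} S_b W\right)
\;+\;\beta\,\Bigl\lVert \tfrac{1}{N_t}\sum_{i} W^{\top} x_i \;-\; \tfrac{1}{N_g}\sum_{j} W^{\top} g_j \Bigr\rVert_2^2
```

Here S_w and S_b denote within- and between-class scatter matrices of the generic training set, x_i its samples, g_j the gallery samples, N_t and N_g their counts, and alpha, beta trade-off weights; all of this notation is introduced for illustration and is not taken from the paper.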
Citations: 7
3D face analysis for demographic biometrics
Pub Date : 2015-05-19 DOI: 10.1109/ICB.2015.7139052
Ryan Tokola, A. Mikkilineni, Chris Boehnen
Despite being increasingly easy to acquire, 3D data is rarely used for face-based biometrics applications beyond identification. Recent work in image-based demographic biometrics has enjoyed much success, but these approaches suffer from the well-known limitations of 2D representations, particularly variations in illumination, texture, and pose, as well as a fundamental inability to describe 3D shape. This paper shows that simple 3D shape features in a face-based coordinate system are capable of representing many biometric attributes without problem-specific models or specialized domain knowledge. The same feature vector achieves impressive results for problems as diverse as age estimation, gender classification, and race classification.
Citations: 11
An efficient approach for clustering face images
Pub Date : 2015-05-19 DOI: 10.1109/ICB.2015.7139091
C. Otto, Brendan Klare, Anil K. Jain
Investigations that require the exploitation of large volumes of face imagery are increasingly common in current forensic scenarios (e.g., Boston Marathon bombing), but effective solutions for triaging such imagery (i.e., low importance, moderate importance, and of critical interest) are not available in the literature. General issues for investigators in these scenarios are a lack of systems that can scale to volumes of images of the order of a few million, and a lack of established methods for clustering the face images into the unknown number of persons of interest contained in the collection. As such, we explore best practices for clustering large sets of face images (up to 1 million here) into large numbers of clusters (approximately 200 thousand) as a method of reducing the volume of data to be investigated by forensic analysts. Our analysis involves a performance comparison of several clustering algorithms in terms of the accuracy of grouping face images by identity, run-time, and efficiency in representing large datasets of face images in terms of compact and isolated clusters. For two different face datasets, a mugshot database (PCSO) and the well known unconstrained dataset, LFW, we find the rank-order clustering method to be effective in clustering accuracy, and relatively efficient in terms of run-time.
在当前的法医场景(例如,波士顿马拉松爆炸案)中,需要利用大量面部图像的调查越来越普遍,但是在文献中没有有效的解决方案来分类这些图像(即低重要性,中等重要性和关键兴趣)。在这些情况下,调查人员面临的一般问题是缺乏可以扩展到数百万量级的图像量的系统,以及缺乏将面部图像聚类到集合中未知数量的感兴趣人员的既定方法。因此,我们探索了将大型面部图像集(这里多达100万)聚类到大量聚类(大约20万)中的最佳实践,作为减少法医分析师要调查的数据量的方法。我们的分析涉及几种聚类算法的性能比较,包括根据身份对人脸图像分组的准确性、运行时间和用紧凑和孤立的聚类表示大型人脸图像数据集的效率。对于两种不同的人脸数据集,即面部照片数据库(PCSO)和众所周知的无约束数据集LFW,我们发现秩序聚类方法在聚类精度上是有效的,在运行时间上是相对高效的。
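The rank-order clustering the abstract finds most effective compares two faces by how well their nearest-neighbor lists agree, rather than by raw feature distance. A minimal sketch of one common formulation of the rank-order distance follows; the brute-force neighbor search and toy 2-D "features" are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def build_neighbor_lists(feats):
    # Brute-force neighbor lists; each sample appears at rank 0 of its own list.
    dists = np.linalg.norm(feats[:, None, :] - feats[None, :, :], axis=-1)
    return [list(np.argsort(row, kind="stable")) for row in dists]

def _asym_rank_dist(nbrs, a, b):
    # d(a, b): sum of b's ranks for a's neighbors, up to b's rank in a's list.
    rank_in_b = {idx: r for r, idx in enumerate(nbrs[b])}
    o_ab = nbrs[a].index(b)  # O_a(b): where b appears in a's neighbor list
    return sum(rank_in_b[nbrs[a][i]] for i in range(o_ab + 1)), o_ab

def rank_order_distance(nbrs, a, b):
    # Symmetrize, then normalize by the smaller of the two cross-ranks.
    d_ab, o_ab = _asym_rank_dist(nbrs, a, b)
    d_ba, o_ba = _asym_rank_dist(nbrs, b, a)
    return (d_ab + d_ba) / max(1, min(o_ab, o_ba))

# Toy "face features": two tight groups standing in for two identities.
feats = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
                  [5.0, 5.0], [5.1, 5.0], [5.0, 5.1]])
nbrs = build_neighbor_lists(feats)
same_id = rank_order_distance(nbrs, 0, 1)  # small: neighbor lists agree
diff_id = rank_order_distance(nbrs, 0, 3)  # large: neighbor lists disagree
```

A clustering pass would then merge pairs (or clusters) whose rank-order distance falls below a threshold; the neighbor-list representation is what lets such methods scale, since an approximate nearest-neighbor index can replace the brute-force search used here.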
{"title":"An efficient approach for clustering face images","authors":"C. Otto, Brendan Klare, Anil K. Jain","doi":"10.1109/ICB.2015.7139091","DOIUrl":"https://doi.org/10.1109/ICB.2015.7139091","url":null,"abstract":"Investigations that require the exploitation of large volumes of face imagery are increasingly common in current forensic scenarios (e.g., Boston Marathon bombing), but effective solutions for triaging such imagery (i.e., low importance, moderate importance, and of critical interest) are not available in the literature. General issues for investigators in these scenarios are a lack of systems that can scale to volumes of images of the order of a few million, and a lack of established methods for clustering the face images into the unknown number of persons of interest contained in the collection. As such, we explore best practices for clustering large sets of face images (up to 1 million here) into large numbers of clusters (approximately 200 thousand) as a method of reducing the volume of data to be investigated by forensic analysts. Our analysis involves a performance comparison of several clustering algorithms in terms of the accuracy of grouping face images by identity, run-time, and efficiency in representing large datasets of face images in terms of compact and isolated clusters. 
For two different face datasets, a mugshot database (PCSO) and the well known unconstrained dataset, LFW, we find the rank-order clustering method to be effective in clustering accuracy, and relatively efficient in terms of run-time.","PeriodicalId":237372,"journal":{"name":"2015 International Conference on Biometrics (ICB)","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115326485","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 14