Unconstrained face detection: State of the art baseline and challenges
Pub Date: 2015-05-19 | DOI: 10.1109/ICB.2015.7139089
J. Cheney, Benjamin Klein, Anil K. Jain, Brendan Klare
A large-scale study of the accuracy and efficiency of face detection algorithms on unconstrained face imagery is presented. Nine different face detection algorithms are studied, acquired through government rights, open source, or commercial licensing. The primary data set used for analysis is the IARPA Janus Benchmark A (IJB-A), a recently released unconstrained face detection and recognition dataset which, at the time of this study, contained 67,183 manually localized faces in 5,712 images and 20,408 video frames. The goal of the study is to determine the state of the art in face detection on unconstrained imagery, motivated by the saturation of recognition accuracies on seminal unconstrained face recognition datasets, which are filtered to contain only faces detectable by a commodity face detection algorithm. The most notable finding from this study is that top-performing detectors still fail to detect the vast majority of faces with extreme pose, partial occlusion, and/or poor illumination; in total, over 20% of faces are missed by all nine detectors studied. Detector speed was generally correlated with accuracy: faster detectors were less accurate than their slower counterparts. Finally, key considerations and guidance are provided for performing face detection evaluations. All software used to conduct the evaluations and plot the accuracies is made available as open source.
{"title":"Unconstrained face detection: State of the art baseline and challenges","authors":"J. Cheney, Benjamin Klein, Anil K. Jain, Brendan Klare","doi":"10.1109/ICB.2015.7139089","DOIUrl":"https://doi.org/10.1109/ICB.2015.7139089","url":null,"abstract":"A large scale study of the accuracy and efficiency of face detection algorithms on unconstrained face imagery is presented. Nine different face detection algorithms are studied, which are acquired through either government rights, open source, or commercial licensing. The primary data set utilized for analysis is the IAPRA Janus Benchmark A (IJB-A), a recently released unconstrained face detection and recognition dataset which, at the time of this study, contained 67,183 manually localized faces in 5,712 images and 20,408 video frames. The goal of the study is to determine the state of the art in face detection with respect to unconstrained imagery which is motivated by the saturation of recognition accuracies on seminal unconstrained face recognition datasets which are filtered to only contain faces detectable by a commodity face detection algorithm. The most notable finding from this study is that top performing detectors still fail to detect the vast majority of faces with extreme pose, partial occlusion, and/or poor illumination. In total, over 20% of faces fail to be detected by all nine detectors studied. The speed of the detectors was generally correlated with accuracy: faster detectors were less accurate than their slower counterparts. Finally, key considerations and guidance is provided for performing face detection evaluations. All software using these methods to conduct the evaluations and plot the accuracies are made available in the open source.","PeriodicalId":237372,"journal":{"name":"2015 International Conference on Biometrics (ICB)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125319763","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cross-sensor iris verification applying robust fused segmentation algorithms
Pub Date: 2015-05-19 | DOI: 10.1109/ICB.2015.7139042
E. G. Llano, J. Colores-Vargas, M. García-Vázquez, L. M. Zamudio-Fuentes, A. A. Ramírez-Acosta
Currently, identity management systems work with heterogeneous iris images captured by different types of iris sensors. Indeed, iris recognition is widely used in environments where a person's identity must be established. It is therefore a challenging problem to maintain a stable iris recognition system that is effective for all types of iris sensors. This paper proposes a new cross-sensor iris recognition scheme that increases recognition accuracy. The novelty of this work is a new strategy of applying robust fusion methods at the segmentation stage for cross-sensor iris recognition. Experiments on the Casia-V3-Interval, Casia-V4-Thousand, Ubiris-V1 and MBGC-V2 databases show that our scheme increases recognition accuracy and is robust to different types of iris sensors while reducing user interaction.
{"title":"Cross-sensor iris verification applying robust fused segmentation algorithms","authors":"E. G. Llano, J. Colores-Vargas, M. García-Vázquez, L. M. Zamudio-Fuentes, A. A. Ramírez-Acosta","doi":"10.1109/ICB.2015.7139042","DOIUrl":"https://doi.org/10.1109/ICB.2015.7139042","url":null,"abstract":"Currently, identity management systems work with heterogeneous iris images captured by different types of iris sensors. Indeed, iris recognition is being widely used in different environments where the identity of a person is necessary. Therefore, it is a challenging problem to maintain a stable iris recognition system which is effective for all type of iris sensors. This paper proposes a new cross-sensor iris recognition scheme that increases the recognition accuracy. The novelty of this work is the new strategy in applying robust fusion methods at level of segmentation stage for cross-sensor iris recognition. The experiments with the Casia-V3-Interval, Casia-V4-Thousand, Ubiris-V1 and MBGC-V2 databases show that our scheme increases the recognition accuracy and it is robust to different types of iris sensors while the user interaction is reduced.","PeriodicalId":237372,"journal":{"name":"2015 International Conference on Biometrics (ICB)","volume":"72 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114781785","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On fusion for multispectral iris recognition
Pub Date: 2015-05-19 | DOI: 10.1109/ICB.2015.7139072
Peter Wild, P. Radu, J. Ferryman
Multispectral iris recognition uses information from multiple bands of the electromagnetic spectrum to better represent certain physiological characteristics of the iris texture and enhance recognition accuracy. This paper addresses the question of single- versus cross-spectral performance and compares score-level fusion accuracy for different feature types, combining different wavelengths to overcome limitations in less constrained recording environments. It is further investigated whether Doddington's "goats" (users who are particularly difficult to recognize) in one spectrum remain so in other spectra. Focusing on the question of feature stability at different wavelengths, this work uses manual ground-truth segmentation, avoiding bias from segmentation errors. Experiments on the public UTIRIS multispectral iris dataset using four feature extraction techniques reveal a significant enhancement when combining NIR + Red for 2-channel and NIR + Red + Blue for 3-channel fusion, across different feature types. Selective feature-level fusion is investigated and shown to improve overall, and especially cross-spectral, performance without increasing the overall length of the iris code.
{"title":"On fusion for multispectral iris recognition","authors":"Peter Wild, P. Radu, J. Ferryman","doi":"10.1109/ICB.2015.7139072","DOIUrl":"https://doi.org/10.1109/ICB.2015.7139072","url":null,"abstract":"Multispectral iris recognition uses information from multiple bands of the electromagnetic spectrum to better represent certain physiological characteristics of the iris texture and enhance obtained recognition accuracy. This paper addresses the questions of single versus cross-spectral performance and compares score-level fusion accuracy for different feature types, combining different wavelengths to overcome limitations in less constrained recording environments. Further it is investigated whether Doddington's “goats” (users who are particularly difficult to recognize) in one spectrum also extend to other spectra. Focusing on the question of feature stability at different wavelengths, this work uses manual ground truth segmentation, avoiding bias by segmentation impact. Experiments on the public UTIRIS multispectral iris dataset using 4 feature extraction techniques reveal a significant enhancement when combining NIR + Red for 2-channel and NIR + Red + Blue for 3-channel fusion, across different feature types. Selective feature-level fusion is investigated and shown to improve overall and especially cross-spectral performance without increasing the overall length of the iris code.","PeriodicalId":237372,"journal":{"name":"2015 International Conference on Biometrics (ICB)","volume":"75 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122142976","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A verify-correct approach to person re-identification based on Partial Least Squares signatures
Pub Date: 2015-05-19 | DOI: 10.1109/ICB.2015.7139088
G. Prado, H. Pedrini, W. R. Schwartz
In the surveillance field, it is very common to have camera networks covering large crowded areas. Often, cameras in these networks do not share the same field of view, and they are not always calibrated. In these cases, common tasks such as tracking cannot be applied directly, as the information from one camera must also be consistent with the others. This is the typical scenario for the person re-identification problem, where people must be detected, tracked, and consistently identified across a network of cameras. Many approaches have been developed to solve this problem in different manners; however, person re-identification remains an open problem due to the many challenges that must be addressed to build a robust system. To tackle the re-identification problem and improve accuracy, we propose a novel approach based on Partial Least Squares signatures derived from the visual appearance of people. We demonstrate the method's performance with experiments conducted on three publicly available data sets. Results show that our method outperforms the chosen baseline on all data sets.
{"title":"A verify-correct approach to person re-identification based on Partial Least Squares signatures","authors":"G. Prado, H. Pedrini, W. R. Schwartz","doi":"10.1109/ICB.2015.7139088","DOIUrl":"https://doi.org/10.1109/ICB.2015.7139088","url":null,"abstract":"In the surveillance field, it is very common to have camera networks covering large crowded areas. Not rarely, cameras in these networks do not share the same field of view and they are not always calibrated. In these cases, common problems such as tracking cannot be directly applied as the information from one camera must be also consistent with the others. This is the most common scenario for the person re-identification problem, where there is the need to detect, track and keep a consistent identification of people across a network of cameras. Many approaches have been developed to solve this problem in different manners. However, person re-identification is still an open problem due to many challenges required to be addressed to build a robust system. To tackle the re-identification problem and improve the accuracy, we propose a novel approach based on Partial Least Squares signatures, which is based on the visual appearance of people. We demonstrate the method performance with experiments conducted on three public available data sets. Results show that our method overcome the chosen baseline on all data sets.","PeriodicalId":237372,"journal":{"name":"2015 International Conference on Biometrics (ICB)","volume":"223 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123665656","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Illumination-normalized face recognition using Chromaticity Intrinsic Image
Pub Date: 2015-05-19 | DOI: 10.1109/ICB.2015.7139050
Wuming Zhang, Xi Zhao, Di Huang, J. Morvan, Yunhong Wang, Liming Chen
Face recognition (FR) across illumination variations endeavors to alleviate the effect of illumination changes on the human face, which remains a great challenge for reliable FR. Most prior studies focus on normalizing holistic lighting intensity while neglecting or simplifying the mechanism of image color formation. In contrast, this paper proposes a novel approach for lighting-robust FR that builds the underlying reflectance model characterizing the appearance of the face surface. Specifically, the proposed illumination processing pipeline sheds light on the interactions among face surface, lighting, and camera, and enables generation of a Chromaticity Intrinsic Image (CII) in a log space that is robust to illumination variations. Experimental results on the CMU-PIE and ESRC face databases show the effectiveness of the proposed approach in dealing with lighting variations in FR.
{"title":"Illumination-normalized face recognition using Chromaticity Intrinsic Image","authors":"Wuming Zhang, Xi Zhao, Di Huang, J. Morvan, Yunhong Wang, Liming Chen","doi":"10.1109/ICB.2015.7139050","DOIUrl":"https://doi.org/10.1109/ICB.2015.7139050","url":null,"abstract":"Face recognition (FR) across illumination variations endeavors to alleviate the effect of illumination changes on human face, which remains a great challenge in reliable FR. Most prior studies focus on normalization of holistic lighting intensity while neglecting or simplifying the mechanism of image color formation. In contrast, we propose in this paper a novel approach for lighting robust FR through building the underlying reflectance model which characterizes the appearance of face surface. Specifically, the proposed illumination processing pipeline sheds light on interactions among face surface, lighting and camera, and enables generation of Chromaticity Intrinsic Image (CII) in a log space which is robust to illumination variations. Experimental results on CMU-PIE and ESRC face databases show the effectiveness of the proposed approach to deal with lighting variations in FR.","PeriodicalId":237372,"journal":{"name":"2015 International Conference on Biometrics (ICB)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129436514","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Guidelines for best practices in biometrics research
Pub Date: 2015-05-19 | DOI: 10.1109/ICB.2015.7139116
Anil K. Jain, Brendan Klare, A. Ross
Biometric recognition has undoubtedly made great strides over the past 50 years. To ensure that current academic research in biometrics has a positive impact on future technological developments, this paper documents guidelines encouraging researchers to focus on high-impact problems, develop solutions that are practically viable, report results using sound experimental and evaluation protocols, and justify claims based on verifiable facts. The intent is to ensure that methods and results published in the literature have been properly evaluated and are practically feasible for automated or semi-automated human recognition. Following these guidelines should avoid inflated claims and place published research on a legitimate foundation that can be embraced by practitioners and peers in biometrics and related scientific disciplines (e.g., forensic science).
{"title":"Guidelines for best practices in biometrics research","authors":"Anil K. Jain, Brendan Klare, A. Ross","doi":"10.1109/ICB.2015.7139116","DOIUrl":"https://doi.org/10.1109/ICB.2015.7139116","url":null,"abstract":"Biometric recognition has undoubtedly made great strides over the past 50 years. To ensure that current academic research in biometrics has a positive impact on future technological developments, this paper documents some guidelines encouraging researchers to focus on high-impact problems, develop solutions that are practically viable, report results using sound experimental and evaluation protocols, and justify claims based on verifiable facts. The intent is to ensure that methods and results published in the literature have been properly evaluated and are practically feasible for automated or semi-automated human recognition. It is believed that following these guidelines will avoid inflated claims and support published research on a legitimate foundation that can be embraced by practitioners and peers in biometrics and related scientific disciplines (e.g, forensic science).","PeriodicalId":237372,"journal":{"name":"2015 International Conference on Biometrics (ICB)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121014890","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Superpixel based finger vein ROI extraction with sensor interoperability
Pub Date: 2015-05-19 | DOI: 10.1109/ICB.2015.7139108
Lu Yang, Gongping Yang, Lizhen Zhou, Yilong Yin
Finger vein is a new and promising trait in biometric recognition, and related progress has been achieved in recent years. Considering that a biometric system may include many different sensors, sensor interoperability is a very important issue that is still neglected in state-of-the-art finger vein recognition. Based on an analysis of the shortcomings of current finger vein ROI extraction methods, this paper proposes a new superpixel-based finger vein ROI extraction method with sensor interoperability. First, finger boundaries are determined by tracking superpixels, which are very robust to image variations such as gray-level changes and background noise. Furthermore, to handle finger displacement, the middle points of the detected finger boundaries are used to adjust the finger direction. Finally, the finger ROI is localized by the internal tangents of the finger boundaries. Experimental results show that the proposed method can extract ROIs accurately and adaptively from images captured by different sensors.
{"title":"Superpixel based finger vein ROI extraction with sensor interoperability","authors":"Lu Yang, Gongping Yang, Lizhen Zhou, Yilong Yin","doi":"10.1109/ICB.2015.7139108","DOIUrl":"https://doi.org/10.1109/ICB.2015.7139108","url":null,"abstract":"Finger vein is a new and promising trait in biometric recognition and some related progress have been achieved in recent years. Considering that there are many different sensors in a biometric system, sensor interoperability is a very important issue and still neglected in the state-of-the-art finger vein recognition. Based on the analysis of the shortcomings in the current finger vein ROI extraction methods, this paper proposes a new superpixel based finger vein ROI extraction method with sensor interoperability. First, finger boundaries are determined by tracking superpixels which are very robust to image variations such as gray level and background noises. Furthermore, to handle finger displacement, the middle points of the detected finger boundaries are used to adjust finger direction. Finally, finger ROI is localized by the internal tangents of finger boundaries. Experimental results show that the proposed method can extract the ROIs accurately and adaptively from images which are captured by different sensors.","PeriodicalId":237372,"journal":{"name":"2015 International Conference on Biometrics (ICB)","volume":"230 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121068539","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Discriminative transfer learning for single-sample face recognition
Pub Date: 2015-05-19 | DOI: 10.1109/ICB.2015.7139095
Junlin Hu, Jiwen Lu, Xiuzhuang Zhou, Yap-Peng Tan
Discriminant analysis is an important technique for face recognition because it can extract discriminative features to classify different persons. However, most existing discriminant analysis methods fail for single-sample face recognition (SSFR) because there is only a single training sample per person, so the within-class variation of each person cannot be estimated. In this paper, we present a new discriminative transfer learning (DTL) approach for SSFR, where discriminant analysis is performed on a multiple-sample generic training set and then transferred to the single-sample gallery set. Specifically, our DTL learns a feature projection that simultaneously minimizes the intra-class variation and maximizes the inter-class variation of samples in the training set, while minimizing the difference between the generic training set and the gallery set. Experimental results on three face datasets, FERET, CAS-PEAL-R1, and LFW, show the efficacy of our method.
{"title":"Discriminative transfer learning for single-sample face recognition","authors":"Junlin Hu, Jiwen Lu, Xiuzhuang Zhou, Yap-Peng Tan","doi":"10.1109/ICB.2015.7139095","DOIUrl":"https://doi.org/10.1109/ICB.2015.7139095","url":null,"abstract":"Discriminant analysis is an important technique for face recognition because it can extract discriminative features to classify different persons. However, most existing discriminant analysis methods fail to work for single-sample face recognition (SSFR) because there is only a single training sample per person such that the within-class variation of this person cannot be estimated in such scenario. In this paper, we present a new discriminative transfer learning (DTL) approach for SSFR, where discriminant analysis is performed on a multiple-sample generic training set and then transferred into the single-sample gallery set. Specifically, our DTL learns a feature projection to minimize the intra-class variation and maximize the inter-class variation of samples in the training set, and minimize the difference between the generic training set and the gallery set, simultaneously. Experimental results on three face datasets including the FERET, CAS-PEAL-R1, and LFW datasets are presented to show the efficacy of our method.","PeriodicalId":237372,"journal":{"name":"2015 International Conference on Biometrics (ICB)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127447819","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
3D face analysis for demographic biometrics
Pub Date: 2015-05-19 | DOI: 10.1109/ICB.2015.7139052
Ryan Tokola, A. Mikkilineni, Chris Boehnen
Despite being increasingly easy to acquire, 3D data is rarely used for face-based biometrics applications beyond identification. Recent work in image-based demographic biometrics has enjoyed much success, but these approaches suffer from the well-known limitations of 2D representations, particularly variations in illumination, texture, and pose, as well as a fundamental inability to describe 3D shape. This paper shows that simple 3D shape features in a face-based coordinate system are capable of representing many biometric attributes without problem-specific models or specialized domain knowledge. The same feature vector achieves impressive results for problems as diverse as age estimation, gender classification, and race classification.
{"title":"3D face analysis for demographic biometrics","authors":"Ryan Tokola, A. Mikkilineni, Chris Boehnen","doi":"10.1109/ICB.2015.7139052","DOIUrl":"https://doi.org/10.1109/ICB.2015.7139052","url":null,"abstract":"Despite being increasingly easy to acquire, 3D data is rarely used for face-based biometrics applications beyond identification. Recent work in image-based demographic biometrics has enjoyed much success, but these approaches suffer from the well-known limitations of 2D representations, particularly variations in illumination, texture, and pose, as well as a fundamental inability to describe 3D shape. This paper shows that simple 3D shape features in a face-based coordinate system are capable of representing many biometric attributes without problem-specific models or specialized domain knowledge. The same feature vector achieves impressive results for problems as diverse as age estimation, gender classification, and race classification.","PeriodicalId":237372,"journal":{"name":"2015 International Conference on Biometrics (ICB)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131984102","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An efficient approach for clustering face images
Pub Date: 2015-05-19 | DOI: 10.1109/ICB.2015.7139091
C. Otto, Brendan Klare, Anil K. Jain
Investigations that require the exploitation of large volumes of face imagery are increasingly common in current forensic scenarios (e.g., the Boston Marathon bombing), but effective solutions for triaging such imagery (i.e., low importance, moderate importance, and of critical interest) are not available in the literature. General issues for investigators in these scenarios are a lack of systems that can scale to volumes of images on the order of a few million, and a lack of established methods for clustering the face images into the unknown number of persons of interest contained in the collection. As such, we explore best practices for clustering large sets of face images (up to 1 million here) into large numbers of clusters (approximately 200 thousand) as a method of reducing the volume of data to be investigated by forensic analysts. Our analysis compares several clustering algorithms in terms of the accuracy of grouping face images by identity, run-time, and efficiency in representing large datasets of face images as compact and isolated clusters. For two different face datasets, a mugshot database (PCSO) and the well-known unconstrained dataset LFW, we find the rank-order clustering method to be effective in clustering accuracy and relatively efficient in run-time.
{"title":"An efficient approach for clustering face images","authors":"C. Otto, Brendan Klare, Anil K. Jain","doi":"10.1109/ICB.2015.7139091","DOIUrl":"https://doi.org/10.1109/ICB.2015.7139091","url":null,"abstract":"Investigations that require the exploitation of large volumes of face imagery are increasingly common in current forensic scenarios (e.g., Boston Marathon bombing), but effective solutions for triaging such imagery (i.e., low importance, moderate importance, and of critical interest) are not available in the literature. General issues for investigators in these scenarios are a lack of systems that can scale to volumes of images of the order of a few million, and a lack of established methods for clustering the face images into the unknown number of persons of interest contained in the collection. As such, we explore best practices for clustering large sets of face images (up to 1 million here) into large numbers of clusters (approximately 200 thousand) as a method of reducing the volume of data to be investigated by forensic analysts. Our analysis involves a performance comparison of several clustering algorithms in terms of the accuracy of grouping face images by identity, run-time, and efficiency in representing large datasets of face images in terms of compact and isolated clusters. For two different face datasets, a mugshot database (PCSO) and the well known unconstrained dataset, LFW, we find the rank-order clustering method to be effective in clustering accuracy, and relatively efficient in terms of run-time.","PeriodicalId":237372,"journal":{"name":"2015 International Conference on Biometrics (ICB)","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115326485","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}