
Latest publications: 2015 International Conference on Biometrics (ICB)

Multi-label CNN based pedestrian attribute learning for soft biometrics
Pub Date : 2015-05-19 DOI: 10.1109/ICB.2015.7139070
Jianqing Zhu, Shengcai Liao, Dong Yi, Zhen Lei, S. Li
Recently, pedestrian attributes such as gender, age, and clothing have been used as soft biometric traits for recognizing people. Unlike existing methods that assume the independence of attributes during prediction, we propose a multi-label convolutional neural network (MLCNN) to predict multiple attributes together in a unified framework. Firstly, a pedestrian image is roughly divided into multiple overlapping body parts, which are simultaneously integrated into the multi-label convolutional neural network. Secondly, these parts are filtered independently and aggregated in the cost layer. The cost function is a combination of multiple binary attribute classification cost functions. Moreover, we propose an attribute-assisted person re-identification method, which fuses attribute distances and low-level feature distances between pairs of person images to improve person re-identification performance. Extensive experiments show that: 1) the average attribute classification accuracy of the proposed method is 5.2% and 9.3% higher than that of the SVM-based method on two public databases, VIPeR and GRID, respectively; 2) the proposed attribute-assisted person re-identification method is superior to existing approaches.
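The abstract names two concrete mechanisms: a cost layer that sums independent binary attribute losses, and score-level fusion of attribute and low-level feature distances for re-identification. The sketch below renders both in plain NumPy; it is not the authors' code, and the fusion weight `alpha` is an assumed parameter.

```python
# Minimal sketch of the two mechanisms described above (not the authors' code).
import numpy as np

def multi_label_cost(logits, labels):
    """Combined cost: sum of per-attribute binary cross-entropy losses.

    logits: (n_attributes,) raw network outputs; labels: (n_attributes,) in {0, 1}.
    """
    probs = 1.0 / (1.0 + np.exp(-logits))  # independent sigmoid per attribute
    eps = 1e-12
    bce = -(labels * np.log(probs + eps) + (1.0 - labels) * np.log(1.0 - probs + eps))
    return bce.sum()

def fused_distance(attr_a, attr_b, feat_a, feat_b, alpha=0.5):
    """Attribute-assisted re-identification distance; alpha is an assumed weight."""
    d_attr = np.linalg.norm(attr_a - attr_b)  # distance between attribute vectors
    d_feat = np.linalg.norm(feat_a - feat_b)  # distance between low-level features
    return alpha * d_attr + (1.0 - alpha) * d_feat
```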
Citations: 121
Live face video vs. spoof face video: Use of moiré patterns to detect replay video attacks
Pub Date : 2015-05-19 DOI: 10.1109/ICB.2015.7139082
Keyurkumar Patel, Hu Han, Anil K. Jain, Greg Ott
With the wide deployment of face recognition systems in applications from border control to mobile device unlocking, combating face spoofing attacks requires increased attention; such attacks can be easily launched via printed photos, video replays and 3D masks. We address the problem of face spoofing detection against replay attacks based on the analysis of aliasing in spoof face videos. The application domain of interest is mobile phone unlocking. We analyze the moiré pattern aliasing that commonly appears during the recapture of video or photo replays on a screen, in different channels (R, G, B and grayscale) and regions (the whole frame, the detected face, and the facial component between the nose and chin). Multi-scale LBP and DSIFT features are used to represent the characteristics of moiré patterns that differentiate a replayed spoof face from a live face (face present). Experimental results on the Idiap replay-attack and CASIA databases, as well as a database collected in our laboratory (RAFS) based on the MSU-FSD database, show that the proposed approach is very effective in face spoof detection for both cross-database and intra-database testing scenarios.
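As a concrete illustration of the texture features mentioned above, here is a basic single-scale LBP histogram in NumPy; the paper uses multi-scale LBP plus DSIFT with a trained classifier, so this is only a simplified sketch, and the 8-neighbour/256-bin choices are assumptions.

```python
# Simplified single-scale LBP descriptor (illustrative; not the paper's pipeline).
import numpy as np

def lbp_histogram(gray):
    """Compute an 8-neighbour LBP code map and return a normalised 256-bin histogram."""
    g = gray.astype(np.int32)
    center = g[1:-1, 1:-1]
    # 8 neighbours, enumerated clockwise from the top-left
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(center)
    for bit, (dy, dx) in enumerate(shifts):
        neighbour = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        code |= (neighbour >= center).astype(np.int32) << bit
    hist, _ = np.histogram(code, bins=256, range=(0, 256))
    return hist / max(hist.sum(), 1)  # moiré aliasing tends to shift this distribution
```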
Citations: 106
Sensor ageing impact on finger-vein recognition
Pub Date : 2015-05-19 DOI: 10.1109/ICB.2015.7139084
Christof Kauba, A. Uhl
The impact of pixel defects related to sensor ageing on the performance of finger-vein based recognition systems is investigated in terms of the EER (Equal Error Rate). To this end, the defect growth rate per year was estimated for the sensor used to capture the data set. Based on this estimate, an experimental study using several simulations with increasing numbers of stuck and hot pixels was conducted to determine the impact on different finger-vein matching schemes. While none of the methods is considerably influenced by a reasonable number of pixel defects, the performance of several schemes drops as the number of defects increases. The impact can be reduced using a simple denoising filter.
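A minimal sketch of the kind of defect simulation described above, assuming a grayscale uint8 image; the defect counts, intensity offsets, and the median filter suggested as the "simple denoising filter" are all assumptions.

```python
# Inject simulated stuck and hot pixels into a grayscale image (illustrative only).
import numpy as np

def add_pixel_defects(img, n_stuck=50, n_hot=50, seed=None):
    """Return a copy of a uint8 grayscale image with simulated ageing defects."""
    rng = np.random.default_rng(seed)
    out = img.astype(np.int32)  # widen dtype so hot-pixel offsets cannot overflow
    h, w = out.shape
    ys = rng.integers(0, h, size=n_stuck + n_hot)
    xs = rng.integers(0, w, size=n_stuck + n_hot)
    # Stuck pixels: permanently black or white.
    out[ys[:n_stuck], xs[:n_stuck]] = rng.choice([0, 255], size=n_stuck)
    # Hot pixels: strongly elevated response.
    out[ys[n_stuck:], xs[n_stuck:]] += rng.integers(100, 200, size=n_hot)
    return np.clip(out, 0, 255).astype(np.uint8)

# A simple denoising filter, e.g. scipy.ndimage.median_filter(defective, size=3),
# removes most isolated single-pixel defects, matching the mitigation noted above.
```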
Citations: 13
Gait regeneration for recognition
Pub Date : 2015-05-19 DOI: 10.1109/ICB.2015.7139048
D. Muramatsu, Yasushi Makihara, Y. Yagi
Gait recognition has the potential to recognize subjects in CCTV footage thanks to its robustness against low image resolution. In CCTV footage, however, several body regions of a subject are often unobservable because of occlusion or cut-off caused by the limited field of view, and therefore recognition must be done from a pair of partially observed data. The most popular approach to recognition from partially observed data is matching the data from a common observable region. This approach, however, cannot be applied when the matching pair has no common observable region. We therefore propose an approach that enables recognition even for pairs with no common observable region. In the proposed approach, we reconstruct the entire gait feature from the partial gait feature extracted from the observable region using a subspace-based method, and match the reconstructed entire gait features for recognition. We evaluate the proposed approach on two different datasets. In the best case, the proposed approach achieves an EER of 16.2% on such pairs.
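The reconstruction step can be pictured as completing a feature vector from a learned linear subspace: solve for the subspace coefficients using only the observed dimensions, then project back. The NumPy sketch below makes that one step concrete under the assumption of a PCA-style mean and basis; the paper's actual training and matching pipeline is more elaborate.

```python
# Subspace-based completion of a partially observed feature vector (illustrative).
import numpy as np

def reconstruct_full_feature(partial, observed_idx, mean, basis):
    """partial: (m,) observed values; observed_idx: (m,) indices into the full
    D-dimensional feature; mean: (D,); basis: (D, k) learned subspace basis."""
    U_obs = basis[observed_idx]  # rows of the basis for the visible dimensions
    coeffs, *_ = np.linalg.lstsq(U_obs, partial - mean[observed_idx], rcond=None)
    return mean + basis @ coeffs  # completed gait feature, ready for matching
```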
Citations: 11
Fine-grained face verification: Dataset and baseline results
Pub Date : 2015-05-19 DOI: 10.1109/ICB.2015.7139079
Junlin Hu, Jiwen Lu, Yap-Peng Tan
This paper investigates the problem of fine-grained face verification under unconstrained conditions. For the conventional face verification task, the verification model is trained with positive and negative face pairs, where each positive sample pair contains two face images of the same person while each negative sample pair usually consists of two face images from different subjects. However, in many real applications, the facial appearance of identical twins looks very similar even though they constitute a negative pair in face verification. It is therefore important for a practical face verification system to determine whether a given face pair comes from the same person or from a pair of twins, because most existing face verification systems fail to work well in such a scenario. In this work, we define the problem as fine-grained face verification and collect an unconstrained face dataset containing 455 pairs of identical twins, from which negative face pairs are generated to evaluate several baseline verification models for fine-grained unconstrained face verification. Benchmark results under the unsupervised setting and the restricted setting show the challenge of fine-grained face verification in the wild.
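A minimal sketch of the verification protocol itself, assuming precomputed feature vectors: a pair is accepted as "same person" when cosine similarity exceeds a threshold, and twin pairs enter the benchmark as hard negatives. The feature extractor and threshold are assumptions, not the paper's baselines.

```python
# Threshold-based face verification on precomputed features (illustrative).
import numpy as np

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(feat_a, feat_b, threshold=0.5):
    """Accept the pair as 'same person' when similarity exceeds the threshold.
    Twin pairs (hard negatives) score high, so errors concentrate there."""
    return cosine_similarity(feat_a, feat_b) >= threshold
```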
Citations: 10
Example-based 3D face reconstruction from uncalibrated frontal and profile images
Pub Date : 2015-05-19 DOI: 10.1109/ICB.2015.7139051
Jing Li, Shuqin Long, Dan Zeng, Qijun Zhao
Reconstructing 3D face models from multiple uncalibrated 2D face images is usually done by using a single reference 3D face model or some gender/ethnicity-specific 3D face models. However, different persons, even those of the same gender or ethnicity, usually have significantly different faces in terms of their overall appearance, which forms the basis of face-based person recognition. Consequently, existing 3D reference model based methods have limited capability of reconstructing 3D face models for a large variety of persons. In this paper, we propose to explore a reservoir of diverse reference models to improve 3D face reconstruction performance. Specifically, we convert the face reconstruction problem into a multi-label segmentation problem. Its energy function is formulated from different cues, including 1) similarity between the desired output and the initial model, 2) color consistency between different views, 3) a smoothness constraint on adjacent pixels, and 4) model consistency within a local neighborhood. Experimental results on challenging datasets demonstrate that the proposed algorithm is capable of recovering high-quality face models in both qualitative and quantitative evaluations.
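To make the multi-label energy concrete, the sketch below evaluates a standard unary-plus-pairwise labeling energy on a pixel grid; the per-pixel `data_cost` stands in for cues 1, 2 and 4 folded into one term, and only the Potts smoothness term (cue 3) is written out explicitly. The weights and this particular decomposition are assumptions, not the paper's exact formulation.

```python
# Evaluate a multi-label segmentation energy (illustrative decomposition).
import numpy as np

def labeling_energy(labels, data_cost, smooth_weight=1.0):
    """labels: (H, W) integer reference-model label per pixel;
    data_cost: (H, W, L) cost of assigning each of L labels to each pixel."""
    h, w = labels.shape
    unary = data_cost[np.arange(h)[:, None], np.arange(w)[None, :], labels].sum()
    # Potts smoothness: penalise label changes between 4-connected neighbours.
    pairwise = (labels[:, 1:] != labels[:, :-1]).sum() \
             + (labels[1:, :] != labels[:-1, :]).sum()
    return unary + smooth_weight * pairwise
```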
Citations: 5
Face retriever: Pre-filtering the gallery via deep neural net
Pub Date : 2015-05-19 DOI: 10.1109/ICB.2015.7139112
Dayong Wang, Anil K. Jain
Face retrieval is an enabling technology for many applications, including automatic face annotation, deduplication, and surveillance. In this paper, we propose a face retrieval system which combines a k-NN search procedure with a COTS matcher (PittPatt) in a cascaded manner. In particular, given a query face, we first pre-filter the gallery set and find the top-k most similar faces for the query image by using deep facial features that are learned with a deep convolutional neural network. The top-k most similar faces are then re-ranked based on score-level fusion of the similarities between deep features and the COTS matcher. To further boost the retrieval performance, we develop a manifold ranking algorithm. The proposed face retrieval system is evaluated on two large-scale face image databases: (i) a web face image database, which consists of over 3,880 query images of 1,507 subjects and a gallery of 5,000,000 faces, and (ii) a mugshot database, which consists of 1,000 query images of 1,000 subjects and a gallery of 1,000,000 faces. Experimental results demonstrate that the proposed face retrieval system can simultaneously improve the retrieval performance (CMC and precision-recall) and scalability for large-scale face retrieval problems.
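A minimal sketch of the cascade structure described above: a fast deep-feature k-NN shortlist over the whole gallery, followed by re-ranking with score-level fusion. The COTS matcher is stubbed as a callable, and the fusion weight `alpha` and shortlist size `k` are assumed parameters; the paper's manifold ranking step is omitted.

```python
# Cascaded retrieval: k-NN pre-filter, then fused re-ranking (illustrative).
import numpy as np

def retrieve(query_feat, gallery_feats, cots_score_fn, k=100, alpha=0.5):
    """Return gallery indices ranked for the query.
    cots_score_fn(i) stands in for the COTS matcher's query-vs-gallery-i score."""
    g = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)
    q = query_feat / np.linalg.norm(query_feat)
    deep_sims = g @ q                        # stage 1: cheap deep-feature similarity
    shortlist = np.argsort(-deep_sims)[:k]   # top-k pre-filtered candidates
    # Stage 2: score-level fusion with the expensive matcher, shortlist only.
    fused = np.array([alpha * deep_sims[i] + (1 - alpha) * cots_score_fn(i)
                      for i in shortlist])
    return shortlist[np.argsort(-fused)]
```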
Citations: 12
Continuous authentication of mobile user: Fusion of face image and inertial Measurement Unit data
Pub Date : 2015-05-19 DOI: 10.1109/ICB.2015.7139043
David Crouse, Hu Han, Deepak Chandra, Brandon Barbello, Anil K. Jain
Mobile devices can carry large amounts of personal data, but are often left unsecured. PIN locks are inconvenient to use and thus have seen low adoption (33% of users). While biometrics are beginning to be used for mobile device authentication, they are used only for initial unlock. Mobile devices secured with only login authentication are still vulnerable to data theft when in an unlocked state. This paper introduces our work on a face-based continuous authentication system that operates in an unobtrusive manner. We present a methodology for fusing mobile device (unconstrained) face capture with gyroscope, accelerometer, and magnetometer data to correct for camera orientation and, by extension, the orientation of the face image. Experiments demonstrate (i) improvement of face recognition accuracy from face orientation correction, and (ii) efficacy of the prototype continuous authentication system.
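As one concrete piece of the sensor-fusion step described above, the sketch below estimates device roll from the accelerometer's gravity reading and counter-rotates the captured face image; the full system also fuses gyroscope and magnetometer data, so this is an illustrative simplification with an assumed device-axis convention.

```python
# Roll correction of a face image from accelerometer data (illustrative).
import numpy as np
from scipy.ndimage import rotate

def correct_roll(image, accel_xyz):
    """accel_xyz: (ax, ay, az) gravity vector in device coordinates (assumed axes)."""
    ax, ay, _ = accel_xyz
    roll_deg = np.degrees(np.arctan2(ax, ay))       # device roll relative to gravity
    return rotate(image, -roll_deg, reshape=False)  # counter-rotate the frame
```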
Citations: 99
A touch-less fingerphoto recognition system for mobile hand-held devices
Pub Date : 2015-05-19 DOI: 10.1109/ICB.2015.7139045
Kamlesh Tiwari, Phalguni Gupta
A fingerphoto is an image of a human finger obtained with an ordinary camera. Its acquisition is convenient and does not require any dedicated biometric scanner. The high degree of freedom in finger positioning, however, makes recognition challenging. This paper proposes a fingerphoto-based human authentication system for mobile hand-held devices using non-conventional scale-invariant features. The system utilizes the built-in camera of the mobile device to acquire the biometric sample and therefore eliminates the dependence on specific scanners. It successfully handles issues such as orientation, rotation and lack of registration at matching time. It achieves a CRR of 96.67% and an EER of 3.33%, which is better than any other system reported in the literature.
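The paper's own scale-invariant features are not specified here, so the sketch below substitutes OpenCV's stock SIFT with Lowe's ratio test purely to illustrate how scale-invariant keypoint matching tolerates the orientation and registration issues mentioned above; the ratio value is an assumption.

```python
# Keypoint matching between two fingerphotos (stand-in features, illustrative).
import cv2

def match_score(img_a, img_b, ratio=0.75):
    """Return the number of ratio-test matches between two grayscale fingerphotos."""
    sift = cv2.SIFT_create()
    _, des_a = sift.detectAndCompute(img_a, None)
    _, des_b = sift.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(des_a, des_b, k=2)
    good = [p[0] for p in knn
            if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    return len(good)  # a higher score suggests the same finger
```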
Citations: 28
Large Margin Coupled Feature Learning for cross-modal face recognition
Pub Date : 2015-05-19 DOI: 10.1109/ICB.2015.7139097
Yi Jin, Jiwen Lu, Q. Ruan
This paper presents a Large Margin Coupled Feature Learning (LMCFL) method for cross-modal face recognition, which recognizes persons from facial images captured in different modalities. Most previous cross-modal face recognition methods utilize hand-crafted feature descriptors for face representation, which require strong prior knowledge to engineer and cannot exploit data-adaptive characteristics in feature extraction. In this work, we propose a new LMCFL method to learn coupled face representations at the image pixel level by jointly utilizing the discriminative information of face images in each modality and the correlation information of face images across modalities. Thus, LMCFL can maximize the margin between positive face pairs and negative face pairs in each modality, and maximize the correlation of face images from different modalities, so that discriminative face features are learned automatically in a data-driven way. Our LMCFL is validated on two different cross-modal face recognition applications, and the experimental results demonstrate the effectiveness of the proposed approach.
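The two objectives the abstract names can be written down directly: a hinge-style margin between positive-pair and negative-pair distances within a modality, and a correlation term over paired features across modalities. The sketch below is a plain NumPy rendering of those two terms under an assumed margin; it is not the paper's optimization.

```python
# The two LMCFL-style objective terms, written out in NumPy (illustrative).
import numpy as np

def pair_margin_loss(d_pos, d_neg, margin=1.0):
    """Hinge loss: push a negative pair's distance above a positive pair's by a margin."""
    return np.maximum(0.0, margin + d_pos - d_neg)

def cross_modal_correlation(feats_a, feats_b):
    """Mean per-dimension correlation between paired features of two modalities;
    a training objective would maximize this quantity."""
    a = (feats_a - feats_a.mean(axis=0)) / (feats_a.std(axis=0) + 1e-12)
    b = (feats_b - feats_b.mean(axis=0)) / (feats_b.std(axis=0) + 1e-12)
    return float((a * b).mean())
```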
Citations: 13