2015 International Conference on Biometrics (ICB): Latest Publications

Multi-label CNN based pedestrian attribute learning for soft biometrics
Pub Date : 2015-05-19 DOI: 10.1109/ICB.2015.7139070
Jianqing Zhu, Shengcai Liao, Dong Yi, Zhen Lei, S. Li
Recently, pedestrian attributes such as gender, age and clothing have been used as soft biometric traits for recognizing people. Unlike existing methods that assume the independence of attributes during their prediction, we propose a multi-label convolutional neural network (MLCNN) to predict multiple attributes together in a unified framework. Firstly, a pedestrian image is roughly divided into multiple overlapping body parts, which are simultaneously integrated in the multi-label convolutional neural network. Secondly, these parts are filtered independently and aggregated in the cost layer. The cost function is a combination of multiple binary attribute classification cost functions. Moreover, we propose an attribute-assisted person re-identification method, which fuses attribute distances and low-level feature distances between pairs of person images to improve person re-identification performance. Extensive experiments show: 1) the average attribute classification accuracy of the proposed method is 5.2% and 9.3% higher than that of the SVM-based method on the two public databases VIPeR and GRID, respectively; 2) the proposed attribute-assisted person re-identification method is superior to existing approaches.
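The abstract names two concrete computational pieces: a cost layer that combines per-attribute binary classification costs, and a fused distance for attribute-assisted re-identification. A minimal numpy sketch of both follows; the sigmoid cross-entropy form and the fusion weight `lam` are illustrative assumptions, not values from the paper.

```python
import numpy as np

def multi_label_cost(logits, labels):
    """Combined cost: one binary attribute classifier per output,
    summed over attributes and averaged over the batch."""
    probs = 1.0 / (1.0 + np.exp(-logits))       # per-attribute sigmoid
    eps = 1e-12
    bce = -(labels * np.log(probs + eps) + (1 - labels) * np.log(1 - probs + eps))
    return bce.sum(axis=1).mean()

def fused_distance(attr_a, attr_b, feat_a, feat_b, lam=0.5):
    """Attribute-assisted re-identification distance: weighted sum of the
    attribute distance and the low-level feature distance; `lam` is a
    hypothetical mixing weight."""
    d_attr = np.linalg.norm(attr_a - attr_b)
    d_feat = np.linalg.norm(feat_a - feat_b)
    return lam * d_attr + (1 - lam) * d_feat
```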
Citations: 121
Gait regeneration for recognition
Pub Date : 2015-05-19 DOI: 10.1109/ICB.2015.7139048
D. Muramatsu, Yasushi Makihara, Y. Yagi
Gait recognition has the potential to recognize subjects in CCTV footage thanks to its robustness to low image resolution. In CCTV footage, however, several body regions of a subject are often unobservable because of occlusion and/or cropping caused by the limited field of view, so recognition must be performed from a pair of partially observed data. The most popular approach to recognition from partially observed data is to match the data from the common observable region. This approach, however, cannot be applied when the matching pair has no common observable region. We therefore propose an approach that enables recognition even for a pair with no common observable region. In the proposed approach, we reconstruct the entire gait feature from a partial gait feature extracted from the observable region using a subspace-based method, and match the reconstructed entire gait features for recognition. We evaluate the proposed approach on two different datasets. In the best case, the proposed approach achieves an EER of 16.2% from such a pair.
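The subspace-based regeneration step can be illustrated with plain PCA: learn a linear subspace from complete training gait features, fit the subspace coefficients to the observed entries of a partial feature by least squares, and read off the missing part from the reconstruction. This is a generic sketch of that idea, not the paper's exact formulation.

```python
import numpy as np

def fit_subspace(X, k):
    """Learn a k-dimensional PCA subspace from complete training gait
    features X of shape (n_samples, dim)."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k].T                          # mean and basis of shape (dim, k)

def regenerate(partial, observed_idx, mu, B):
    """Reconstruct the entire gait feature from its observed entries by
    least-squares fitting the subspace coefficients on the visible rows."""
    Bo = B[observed_idx]                         # rows of the basis we can observe
    c, *_ = np.linalg.lstsq(Bo, partial - mu[observed_idx], rcond=None)
    return mu + B @ c                            # full regenerated feature
```

Matching then proceeds between two regenerated features (e.g. by Euclidean distance), even when their observable regions do not overlap.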
Citations: 11
Fine-grained face verification: Dataset and baseline results
Pub Date : 2015-05-19 DOI: 10.1109/ICB.2015.7139079
Junlin Hu, Jiwen Lu, Yap-Peng Tan
This paper investigates the problem of fine-grained face verification under unconstrained conditions. For the conventional face verification task, the verification model is trained with positive and negative face pairs, where each positive pair contains two face images of the same person and each negative pair usually consists of face images of two different subjects. However, in many real applications, the facial appearance of identical twins looks very similar even though they constitute a negative pair in face verification. It is therefore important for a practical face verification system to determine whether a given face pair comes from the same person or from a pair of twins, because most existing face verification systems fail to work well in such a scenario. In this work, we define this problem as fine-grained face verification and collect an unconstrained face dataset containing 455 pairs of identical twins to generate negative face pairs, which we use to evaluate several baseline verification models. Benchmark results under the unsupervised and restricted settings show the challenge of fine-grained face verification in the wild.
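A baseline of the kind evaluated here can be sketched as similarity-threshold verification in which the impostor pairs come from identical twins, scored by the equal error rate. The feature extractor is left abstract; this protocol sketch is an illustrative assumption, not the paper's exact setup.

```python
import numpy as np

def cosine_score(a, b):
    """Similarity between two face feature vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def equal_error_rate(genuine_scores, twin_impostor_scores):
    """Sweep a decision threshold and return the point where the false
    accept rate (twins accepted) equals the false reject rate."""
    genuine = np.asarray(genuine_scores)
    impostors = np.asarray(twin_impostor_scores)
    best_far, best_frr = 1.0, 0.0
    for t in np.sort(np.concatenate([genuine, impostors])):
        far = np.mean(impostors >= t)            # twin pair wrongly accepted
        frr = np.mean(genuine < t)               # same person wrongly rejected
        if abs(far - frr) < abs(best_far - best_frr):
            best_far, best_frr = far, frr
    return (best_far + best_frr) / 2
```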
Citations: 10
Live face video vs. spoof face video: Use of moiré patterns to detect replay video attacks
Pub Date : 2015-05-19 DOI: 10.1109/ICB.2015.7139082
Keyurkumar Patel, Hu Han, Anil K. Jain, Greg Ott
With the wide deployment of face recognition systems in applications from border control to mobile device unlocking, combating face spoofing attacks requires increased attention; such attacks can be easily launched via printed photos, video replays and 3D masks. We address the problem of face spoofing detection against replay attacks based on the analysis of aliasing in spoof face videos. The application domain of interest is mobile phone unlocking. We analyze the moiré pattern aliasing that commonly appears when video or photo replays on a screen are recaptured, in different channels (R, G, B and grayscale) and regions (the whole frame, the detected face, and the facial component between the nose and chin). Multi-scale LBP and DSIFT features are used to represent the characteristics of moiré patterns that differentiate a replayed spoof face from a live face (face present). Experimental results on the Idiap Replay-Attack and CASIA databases, as well as a database collected in our laboratory (RAFS) based on the MSU-FSD database, show that the proposed approach is very effective in face spoof detection for both cross-database and intra-database testing scenarios.
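The feature side can be roughly sketched with skimage and scikit-learn: uniform LBP histograms at several radii, computed per channel (R, G, B and grayscale) and concatenated, then fed to an SVM for live-vs-spoof classification. The radii and classifier settings are illustrative, and the DSIFT features and region split described above are omitted.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def ms_lbp_histogram(channel, radii=(1, 2, 3), points=8):
    """Concatenated uniform-LBP histograms at several radii."""
    feats = []
    for r in radii:
        codes = local_binary_pattern(channel, points, r, method="uniform")
        hist, _ = np.histogram(codes, bins=points + 2,
                               range=(0, points + 2), density=True)
        feats.append(hist)
    return np.concatenate(feats)

def moire_features(rgb):
    """Multi-scale LBP over the R, G, B and grayscale channels of a frame."""
    gray = rgb.mean(axis=2)
    channels = [rgb[..., 0], rgb[..., 1], rgb[..., 2], gray]
    return np.concatenate([ms_lbp_histogram(np.asarray(c, float)) for c in channels])

# Training on labeled frames (0 = live, 1 = replay spoof):
# clf = SVC(kernel="rbf").fit(np.stack([moire_features(f) for f in frames]), labels)
```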
Citations: 106
Sensor ageing impact on finger-vein recognition
Pub Date : 2015-05-19 DOI: 10.1109/ICB.2015.7139084
Christof Kauba, A. Uhl
The impact of sensor-ageing-related pixel defects on the performance of finger-vein based recognition systems is investigated in terms of the EER (Equal Error Rate). To this end, the yearly defect growth rate of the sensor used to capture the dataset was estimated. Based on this estimate, an experimental study using several simulations with increasing numbers of stuck and hot pixels was conducted to determine the impact on different finger-vein matching schemes. Whereas none of the methods is considerably influenced by a reasonable number of pixel defects, the performance of several schemes drops as the number of defects increases. The impact can be reduced using a simple denoising filter.
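The simulation setup can be sketched as injecting a growing number of hot (saturated) and stuck (frozen) pixels into the images before matching, with a median filter as the simple denoising step mentioned at the end. The defect counts and the stuck value below are arbitrary placeholders, not the estimated growth rates from the paper.

```python
import numpy as np
from scipy.ndimage import median_filter

def add_pixel_defects(img, n_hot=50, n_stuck=50, stuck_value=0, seed=None):
    """Simulate ageing defects on a grayscale uint8 image: hot pixels
    saturate to 255, stuck pixels are frozen at a fixed value."""
    rng = np.random.default_rng(seed)
    out = img.copy()
    h, w = out.shape
    ys = rng.integers(0, h, n_hot + n_stuck)
    xs = rng.integers(0, w, n_hot + n_stuck)
    out[ys[:n_hot], xs[:n_hot]] = 255            # hot pixels
    out[ys[n_hot:], xs[n_hot:]] = stuck_value    # stuck pixels
    return out

def denoise(img, size=3):
    """Median filter: the kind of simple denoising that reduces the
    impact of pixel defects on matching performance."""
    return median_filter(img, size=size)
```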
Citations: 13
Large Margin Coupled Feature Learning for cross-modal face recognition
Pub Date : 2015-05-19 DOI: 10.1109/ICB.2015.7139097
Yi Jin, Jiwen Lu, Q. Ruan
This paper presents a Large Margin Coupled Feature Learning (LMCFL) method for cross-modal face recognition, which recognizes persons from facial images captured in different modalities. Most previous cross-modal face recognition methods utilize hand-crafted feature descriptors for face representation, which require strong prior knowledge to engineer and cannot exploit data-adaptive characteristics during feature extraction. In this work, we propose a new LMCFL method to learn a coupled face representation at the image pixel level by jointly utilizing the discriminative information of face images in each modality and the correlation information of face images across modalities. Thus, LMCFL can maximize the margin between positive face pairs and negative face pairs in each modality, and maximize the correlation of face images from different modalities, so that discriminative face features are learned automatically in a discriminative, data-driven way. LMCFL is validated on two different cross-modal face recognition applications, and the experimental results demonstrate the effectiveness of our proposed approach.
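The two terms of the objective, a margin between positive and negative pairs and a cross-modality correlation, can be illustrated with a toy contrastive-plus-correlation loss over two linear projections, one per modality. The hinge form and the `margin` and `alpha` values are illustrative assumptions; the paper's actual formulation differs.

```python
import numpy as np

def coupled_loss(Xa, Xb, Wa, Wb, pairs, labels, margin=1.0, alpha=0.5):
    """Toy coupled objective. Xa, Xb: features of the two modalities
    (rows aligned by subject for the correlation term); Wa, Wb: the
    coupled projections being learned; pairs: (i, j) cross-modal index
    pairs; labels: +1 for the same person, -1 otherwise."""
    Pa, Pb = Xa @ Wa, Xb @ Wb
    loss = 0.0
    for (i, j), y in zip(pairs, labels):
        d = np.sum((Pa[i] - Pb[j]) ** 2)         # squared distance after projection
        if y == 1:
            loss += d                            # pull positive pairs together
        else:
            loss += max(0.0, margin - d)         # push negatives beyond the margin
    corr = np.sum(Pa * Pb)                       # correlation of aligned projections
    return loss - alpha * corr
```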
Citations: 13
A touch-less fingerphoto recognition system for mobile hand-held devices
Pub Date : 2015-05-19 DOI: 10.1109/ICB.2015.7139045
Kamlesh Tiwari, Phalguni Gupta
A fingerphoto is an image of a human finger captured with an ordinary camera. Its acquisition is convenient and does not require any special biometric scanner. The high degree of freedom in finger positioning, however, makes recognition challenging. This paper proposes a fingerphoto based human authentication system for mobile hand-held devices using non-conventional scale-invariant features. The system uses the built-in camera of the mobile device to acquire the biometric sample and therefore eliminates the dependence on specific scanners. It successfully handles issues such as orientation, rotation and lack of registration at matching time. It achieves a CRR of 96.67% and an EER of 3.33%, better than any other system reported in the literature.
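For a flavor of scale-invariant matching under free finger positioning, here is a standard SIFT-plus-ratio-test sketch with OpenCV. The paper uses its own non-conventional scale-invariant features, so SIFT merely stands in; the 0.75 ratio is Lowe's conventional threshold.

```python
import cv2

def match_fingerphotos(img1, img2, ratio=0.75):
    """Count ratio-test survivors between two grayscale fingerphotos;
    the match count serves as a crude similarity score that tolerates
    rotation and scale changes."""
    sift = cv2.SIFT_create()
    _, d1 = sift.detectAndCompute(img1, None)
    _, d2 = sift.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    pairs = matcher.knnMatch(d1, d2, k=2)
    good = [p[0] for p in pairs
            if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    return len(good)
```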
Citations: 28
Continuous authentication of mobile user: Fusion of face image and inertial Measurement Unit data
Pub Date : 2015-05-19 DOI: 10.1109/ICB.2015.7139043
David Crouse, Hu Han, Deepak Chandra, Brandon Barbello, Anil K. Jain
Mobile devices can carry large amounts of personal data, but are often left unsecured. PIN locks are inconvenient to use and thus have seen low adoption (33% of users). While biometrics are beginning to be used for mobile device authentication, they are used only for initial unlock. Mobile devices secured with only login authentication are still vulnerable to data theft when in an unlocked state. This paper introduces our work on a face-based continuous authentication system that operates in an unobtrusive manner. We present a methodology for fusing mobile device (unconstrained) face capture with gyroscope, accelerometer, and magnetometer data to correct for camera orientation and, by extension, the orientation of the face image. Experiments demonstrate (i) improvement of face recognition accuracy from face orientation correction, and (ii) efficacy of the prototype continuous authentication system.
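The orientation-correction idea can be sketched from the accelerometer alone: estimate the camera roll from the measured gravity vector and counter-rotate the captured face. The paper fuses gyroscope and magnetometer data as well, so this is a deliberately simplified stand-in.

```python
import numpy as np
from scipy.ndimage import rotate

def roll_from_accelerometer(ax, ay):
    """Roll angle (degrees) of the device from the gravity components
    in the camera plane, assuming the device is roughly static."""
    return np.degrees(np.arctan2(ax, ay))

def upright_face(face_img, ax, ay):
    """Counter-rotate the captured face image so it is approximately
    upright before running face recognition."""
    return rotate(face_img, angle=-roll_from_accelerometer(ax, ay), reshape=False)
```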
Citations: 99
Crowd powered latent Fingerprint Identification: Fusing AFIS with examiner markups
Pub Date : 2015-05-19 DOI: 10.1109/ICB.2015.7139062
Sunpreet S. Arora, Kai Cao, Anil K. Jain, Gregoire Michaud
Automatic matching of poor quality latent fingerprints to rolled/slap fingerprints using an Automated Fingerprint Identification System (AFIS) is still far from satisfactory. Therefore, it is common practice to have a latent examiner mark features on a latent to improve the hit rate of the AFIS. We propose a synergistic crowd-powered latent identification framework in which multiple latent examiners and the AFIS work in conjunction with each other to boost the identification accuracy of the AFIS. Given a latent, the candidate list output by the AFIS is used to determine the likelihood that a hit was found at rank-1. A latent for which this likelihood is low is crowdsourced to a pool of latent examiners for feature markup. The manual markups are then input to the AFIS to increase the likelihood of making a hit in the reference database. Experimental results show that fusing the AFIS with examiner markups improves the rank-1 identification accuracy of the AFIS by 7.75% (using six markups) on the 500 ppi NIST SD27, by 11.37% (using two markups) on the 1000 ppi ELFT-EFS public challenge database, and by 2.5% (using a single markup) on the 1000 ppi RS&A database, against 250,000 rolled prints in the reference database.
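The triage step, deciding from the AFIS candidate list whether a rank-1 hit is likely and crowdsourcing the latent to examiners when it is not, can be sketched with a score-gap heuristic. The relative-gap rule and threshold below are illustrative; the paper derives the rank-1 hit likelihood differently.

```python
def needs_examiner_markup(candidate_scores, gap_threshold=0.15):
    """Return True when the AFIS candidate list shows no clear rank-1
    winner, i.e. when the latent should be sent to examiners for
    feature markup before re-searching the reference database."""
    s = sorted(candidate_scores, reverse=True)
    if len(s) < 2 or s[0] <= 0:
        return True
    gap = (s[0] - s[1]) / s[0]                   # relative rank-1 / rank-2 gap
    return gap < gap_threshold                   # small gap -> uncertain -> markup
```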
Citations: 11
Color local phase quantization (CLPQ) - A new face representation approach using color texture cues
Pub Date : 2015-05-19 DOI: 10.1109/ICB.2015.7139049
Akanksha Joshi, A. Gangwar
In this paper, we introduce new methods to encode color local texture features for enhanced face representation. In particular, we first propose a novel descriptor, color local phase quantization (CLPQ), which incorporates (channel-wise) unichrome and (cross-channel) opponent features in the frequency domain. Furthermore, we extend the CLPQ descriptor to multiple scales, i.e., multiscale color LPQ (MS-CLPQ), which exploits complementary information at different scales. In addition, we extend the multispectral LBP to multiple scales and propose multiscale color LBP (MS-CLBP), which provides illumination invariance and extracts features in the spatial domain. To form the proposed color local texture descriptors, the unichrome and opponent features are combined using an image-level fusion strategy, and the final representation is obtained by concatenating regional histograms. To reduce the high dimensionality of the features, we apply Direct LDA, which also enhances the discriminative ability of the descriptors. The experimental analysis illustrates that the proposed MS-CLPQ approach significantly outperforms other descriptor-based approaches for face recognition (FR), and score-level fusion of MS-CLPQ and MS-CLBP further improves FR performance and robustness. The validity of the proposed approaches is ascertained through comprehensive comparisons on three challenging face databases: FRGC 2.0, GTDB and PUT.
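For reference, single-channel LPQ (which CLPQ extends with unichrome and cross-channel opponent variants) quantizes the signs of the real and imaginary parts of a windowed Fourier transform at four low frequencies into an 8-bit code per pixel, then histograms the codes. A compact numpy/scipy sketch follows, with the window size as a free parameter; decorrelation and the color extensions are omitted.

```python
import numpy as np
from scipy.signal import convolve2d

def lpq_histogram(gray, win=7):
    """Basic LPQ: separable STFT filters at frequencies (a,0), (0,a),
    (a,a), (a,-a) with a = 1/win; sign-quantize the real and imaginary
    parts into 8 bits per pixel and histogram the codes."""
    x = np.arange(win) - (win - 1) / 2
    a = 1.0 / win
    w0 = np.ones(win)
    w1 = np.exp(-2j * np.pi * a * x)

    def stft(row_f, col_f):
        tmp = convolve2d(gray.astype(float), row_f.reshape(1, -1), mode="valid")
        return convolve2d(tmp, col_f.reshape(-1, 1), mode="valid")

    F = [stft(w1, w0), stft(w0, w1), stft(w1, w1), stft(w1, np.conj(w1))]
    codes = np.zeros(F[0].shape, dtype=np.uint8)
    k = 0
    for f in F:
        for part in (np.real(f), np.imag(f)):
            codes |= ((part > 0).astype(np.uint8) << k)
            k += 1
    hist, _ = np.histogram(codes, bins=256, range=(0, 256), density=True)
    return hist
```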
Citations: 6