
2019 International Conference on Biometrics (ICB): Latest Publications

FaceQnet: Quality Assessment for Face Recognition based on Deep Learning
Pub Date : 2019-04-03 DOI: 10.1109/ICB45273.2019.8987255
J. Hernandez-Ortega, Javier Galbally, Julian Fierrez, Rudolf Haraksim, Laurent Beslay
In this paper we develop a Quality Assessment approach for face recognition based on deep learning. The method consists of a Convolutional Neural Network, FaceQnet, that is used to predict the suitability of a specific input image for face recognition purposes. The training of FaceQnet is done using the VGGFace2 database. We employ the BioLab-ICAO framework for labeling the VGGFace2 images with quality information related to their ICAO compliance level. The groundtruth quality labels are obtained using FaceNet to generate comparison scores. We employ the groundtruth data to fine-tune a ResNet-based CNN, making it capable of returning a numerical quality measure for each input image. Finally, we verify whether the FaceQnet scores are suitable to predict the expected performance when employing a specific image for face recognition with a COTS face recognition system. Several conclusions can be drawn from this work, most notably: 1) we managed to employ an existing ICAO compliance framework and a pretrained CNN to automatically label data with quality information, 2) we trained FaceQnet for quality estimation by fine-tuning a pre-trained face recognition network (ResNet-50), and 3) we have shown that the predictions from FaceQnet are highly correlated with the face recognition accuracy of a state-of-the-art commercial system not used during development. FaceQnet is publicly available on GitHub.
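The method amounts to attaching a quality-regression head to a pre-trained recognition backbone and fine-tuning it against the automatically generated quality labels. Below is a minimal sketch of that setup; the plain torchvision ResNet-50, the 224x224 input size, and the `train_step` helper are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn
from torchvision import models

# Sketch: a generic ResNet-50 stands in for the pre-trained face recognition
# backbone; its classifier is swapped for a single-output regression head that
# predicts a quality score in [0, 1].
backbone = models.resnet50()
backbone.fc = nn.Sequential(nn.Linear(backbone.fc.in_features, 1), nn.Sigmoid())

criterion = nn.MSELoss()
optimizer = torch.optim.Adam(backbone.parameters(), lr=1e-4)

def train_step(images, quality_labels):
    """One fine-tuning step. images: [B, 3, 224, 224]; quality_labels: [B] in [0, 1]."""
    optimizer.zero_grad()
    pred = backbone(images).squeeze(1)       # predicted quality per image
    loss = criterion(pred, quality_labels)   # regress toward the groundtruth labels
    loss.backward()
    optimizer.step()
    return loss.item()

# Hypothetical batch, only to show the expected shapes.
print(train_step(torch.randn(8, 3, 224, 224), torch.rand(8)))
```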
Citations: 100
OGCTL: Occlusion-guided compact template learning for ensemble deep network-based pose-invariant face recognition
Pub Date : 2019-03-12 DOI: 10.1109/ICB45273.2019.8987272
Yuhang Wu, I. Kakadiaris
Concatenation of the deep network representations extracted from different facial patches helps to improve face recognition performance. However, the concatenated facial template increases in size and contains redundant information. Previous solutions aim to reduce the dimensionality of the facial template without considering the occlusion pattern of the facial patches. In this paper, we propose an occlusion-guided compact template learning (OGCTL) approach that only uses the information from visible patches to construct the compact template. The compact face representation is not sensitive to the number of patches that are used to construct the facial template, and is more suitable for incorporating the information from different view angles for image-set based face recognition. Instead of using occlusion masks in face matching (e.g., DPRFS [38]), the proposed method uses occlusion masks in template construction and achieves significantly better image-set based face verification performance on a challenging database with a template size that is an order-of-magnitude smaller than DPRFS.
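The central step, building the compact template from the visible patches only, can be sketched as a masked aggregation over per-patch embeddings. The `aggregate_visible` helper and the mean-pooling choice below are illustrative assumptions, not the authors' OGCTL training procedure.

```python
import numpy as np

def aggregate_visible(patch_features: np.ndarray, visible: np.ndarray) -> np.ndarray:
    """Build a compact template from per-patch embeddings.

    patch_features: [num_patches, dim] deep features, one row per facial patch.
    visible:        [num_patches] boolean mask, True where the patch is not occluded.
    """
    if not visible.any():
        raise ValueError("no visible patches to aggregate")
    template = patch_features[visible].mean(axis=0)   # occluded patches are ignored
    return template / np.linalg.norm(template)        # L2-normalise for cosine matching

# Hypothetical example: 8 patches with 256-D features, patches 2 and 5 occluded.
feats = np.random.randn(8, 256)
mask = np.ones(8, dtype=bool)
mask[[2, 5]] = False
compact = aggregate_visible(feats, mask)              # shape (256,)
```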
Citations: 0
The Unconstrained Ear Recognition Challenge 2019
Pub Date : 2019-03-11 DOI: 10.1109/ICB45273.2019.8987337
Ž. Emeršič, S. V. A. Kumar, B. Harish, Weronika Gutfeter, J. Khiarak, A. Pacut, E. Hansley, Maurício Pamplona Segundo, Sudeep Sarkar, Hyeon-Nam Park, G. Nam, Ig-Jae Kim, S. G. Sangodkar, Umit Kacar, M. Kirci, Li Yuan, Jishou Yuan, Haonan Zhao, Fei Lu, Junying Mao, Xiaoshuang Zhang, Dogucan Yaman, Fevziye Irem Eyiokur, Kadir Bulut Özler, H. K. Ekenel, D. P. Chowdhury, Sambit Bakshi, B. Majhi, P. Peer, V. Štruc
This paper presents a summary of the 2019 Unconstrained Ear Recognition Challenge (UERC), the second in a series of group benchmarking efforts centered around the problem of person recognition from ear images captured in uncontrolled settings. The goal of the challenge is to assess the performance of existing ear recognition techniques on a challenging large-scale ear dataset and to analyze performance of the technology from various viewpoints, such as generalization to unseen data characteristics; sensitivity to rotations, occlusions, and image resolution; and performance bias on sub-groups of subjects selected based on demographic criteria, i.e., gender and ethnicity. Research groups from 12 institutions entered the competition and submitted a total of 13 recognition approaches ranging from descriptor-based methods to deep-learning models. The majority of submissions focused on ensemble-based methods combining either representations from multiple deep models or hand-crafted descriptors with learned image descriptors. Our analysis shows that methods incorporating deep learning models clearly outperform techniques relying solely on hand-crafted descriptors, even though both groups of techniques exhibit similar behavior when it comes to robustness to various covariates, such as the presence of occlusions, changes in (head) pose, or variability in image resolution. The results of the challenge also show that there has been considerable progress since the first UERC in 2017, but that there is still ample room for further research in this area.
Citations: 20
RoPAD: Robust Presentation Attack Detection through Unsupervised Adversarial Invariance
Pub Date : 2019-03-08 DOI: 10.1109/ICB45273.2019.8987276
Ayush Jaiswal, Shuai Xia, I. Masi, Wael AbdAlmageed
For enterprise, personal and societal applications, there is now an increasing demand for automated authentication of identity from images using computer vision. However, current authentication technologies are still vulnerable to presentation attacks. We present RoPAD, an end-to-end deep learning model for presentation attack detection that employs unsupervised adversarial invariance to ignore visual distractors in images for increased robustness and reduced overfitting. Experiments show that the proposed framework exhibits state-of-the-art performance on presentation attack detection on several benchmark datasets.
Citations: 15
Video Face Recognition: Component-wise Feature Aggregation Network (C-FAN)
Pub Date : 2019-02-19 DOI: 10.1109/ICB45273.2019.8987385
Sixue Gong, Yichun Shi, N. Kalka, Anil K. Jain
We propose a new approach to video face recognition. Our component-wise feature aggregation network (C-FAN) accepts a set of face images of a subject as an input, and outputs a single feature vector as the face representation of the set for the recognition task. The whole network is trained in two steps: (i) train a base CNN for still image face recognition; (ii) add an aggregation module to the base network to learn the quality value for each feature component, which adaptively aggregates deep feature vectors into a single vector to represent the face in a video. C-FAN automatically learns to retain salient face features with high quality scores while suppressing features with low quality scores. The experimental results on three benchmark datasets, YouTube Faces [39], IJB-A [13], and IJB-S [12], show that the proposed C-FAN network is capable of generating a compact feature vector with 512 dimensions for a video sequence by efficiently aggregating feature vectors of all the video frames to achieve state-of-the-art performance.
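The aggregation module weights every feature component of every frame by a learned quality value and collapses the sequence into a single 512-D vector. A sketch of that pooling step is below; the softmax normalisation across frames is one plausible choice and an assumption here, since the abstract does not specify it.

```python
import torch

def aggregate(frame_features: torch.Tensor, quality: torch.Tensor) -> torch.Tensor:
    """Pool per-frame deep features into one face template.

    frame_features: [T, 512] feature vectors, one per video frame.
    quality:        [T, 512] component-wise quality values from the aggregation module.
    """
    weights = torch.softmax(quality, dim=0)        # per component, normalised across frames
    return (weights * frame_features).sum(dim=0)   # single 512-D template

# Hypothetical 30-frame video; random stand-ins for CNN features and quality values.
template = aggregate(torch.randn(30, 512), torch.randn(30, 512))
print(template.shape)   # torch.Size([512])
```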
Citations: 39
Periocular Recognition in the Wild with Orthogonal Combination of Local Binary Coded Pattern in Dual-stream Convolutional Neural Network
Pub Date : 2019-02-18 DOI: 10.1109/ICB45273.2019.8987278
L. Tiong, A. Teoh, Yunli Lee
In spite of the advancements made in periocular recognition, both datasets and recognition in the wild remain a challenge. In this paper, we propose a multilayer fusion approach by means of a pair of shared-parameter (dual-stream) convolutional neural networks, where one stream accepts RGB data and the other a novel colour-based texture descriptor, namely the Orthogonal Combination-Local Binary Coded Pattern (OC-LBCP), for periocular recognition in the wild. Specifically, two distinct late-fusion layers are introduced in the dual-stream network to aggregate the RGB data and OC-LBCP. Thus, the network benefits from the late-fusion layers with a gain in accuracy. We also introduce and share a new dataset for periocular recognition in the wild, namely the Ethnic-ocular dataset, for benchmarking. The proposed network has also been assessed on one publicly available dataset, namely UBIPr, and outperforms several competing approaches on these datasets.
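The dual-stream design passes the RGB image through one stream and the OC-LBCP descriptor through the other, with shared parameters, and fuses the two only in late layers. The sketch below shows that wiring with a deliberately tiny branch; the OC-LBCP computation itself, the branch depth, and the class count are assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class DualStreamLateFusion(nn.Module):
    """Two input streams through a shared-weight CNN branch, fused late."""

    def __init__(self, num_classes: int = 100):
        super().__init__()
        # Shared branch applied to both the RGB image and the descriptor map.
        self.branch = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Late fusion: concatenate the two stream embeddings, then classify.
        self.fusion = nn.Linear(64 * 2, num_classes)

    def forward(self, rgb: torch.Tensor, descriptor: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.branch(rgb), self.branch(descriptor)], dim=1)
        return self.fusion(fused)

# Hypothetical inputs: the descriptor stream is assumed to be a 3-channel map.
model = DualStreamLateFusion()
logits = model(torch.randn(4, 3, 64, 64), torch.randn(4, 3, 64, 64))
```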
Citations: 12
Actions Speak Louder Than (Pass)words: Passive Authentication of Smartphone* Users via Deep Temporal Features
Pub Date : 2019-01-16 DOI: 10.1109/ICB45273.2019.8987433
Debayan Deb, A. Ross, Anil K. Jain, K. Prakah-Asante, K. Prasad
Prevailing user authentication schemes on smartphones rely on explicit user interaction, where a user types in a passcode or presents a biometric cue such as face, fingerprint, or iris. In addition to being cumbersome and obtrusive to the users, such authentication mechanisms pose security and privacy concerns. Passive authentication systems can tackle these challenges by unobtrusively monitoring the user’s interaction with the device. We propose a Siamese Long Short-Term Memory (LSTM) network architecture for passive authentication, where users can be verified without requiring any explicit authentication step. On a dataset comprising measurements from 30 smartphone sensor modalities for 37 users, we evaluate our approach on 8 dominant modalities, namely, keystroke dynamics, GPS location, accelerometer, gyroscope, magnetometer, linear accelerometer, gravity, and rotation sensors. Experimental results show that a genuine user can be correctly verified 96.47% of the time at a false accept rate of 0.1% within 3 seconds.
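Passive verification here boils down to embedding two windows of multi-modal sensor readings with a shared LSTM and thresholding the distance between the embeddings. A minimal sketch under hypothetical sizes (30 timesteps, 8 modality features, 32-D embedding) follows; it is not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class SiameseLSTM(nn.Module):
    """Embed a window of sensor readings; identical weights for both inputs."""

    def __init__(self, num_features: int = 8, hidden: int = 64, embed_dim: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(num_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, embed_dim)

    def embed(self, x: torch.Tensor) -> torch.Tensor:
        _, (h, _) = self.lstm(x)          # h: [1, B, hidden], last hidden state
        return self.head(h.squeeze(0))    # [B, embed_dim]

    def forward(self, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        return torch.norm(self.embed(a) - self.embed(b), dim=1)   # distance per pair

# Hypothetical batch: 4 pairs of 3-second windows, 30 timesteps x 8 modality features.
model = SiameseLSTM()
dist = model(torch.randn(4, 30, 8), torch.randn(4, 30, 8))
# Small distance -> same user (genuine); large distance -> impostor.
```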
Citations: 35
Generalizing Fingerprint Spoof Detector: Learning a One-Class Classifier
Pub Date : 2019-01-13 DOI: 10.1109/ICB45273.2019.8987319
Joshua J. Engelsma, Anil K. Jain
Prevailing fingerprint recognition systems are vulnerable to spoof attacks. To mitigate these attacks, automated spoof detectors are trained to distinguish a set of live or bona fide fingerprints from a set of known spoof fingerprints. Despite their success, spoof detectors remain vulnerable when exposed to attacks from spoofs made with materials not seen during training of the detector. To alleviate this shortcoming, we approach spoof detection as a one-class classification problem. The goal is to train a spoof detector on only the live fingerprints such that once the concept of "live" has been learned, spoofs of any material can be rejected. We accomplish this through training multiple generative adversarial networks (GANs) on live fingerprint images acquired with the open source, dual-camera, 1900 ppi RaspiReader fingerprint reader. Our experimental results, conducted on 5.5K spoof images (from 12 materials) and 11.8K live images, show that the proposed approach improves the cross-material spoof detection performance over state-of-the-art one-class and binary-class spoof detectors on 11 of 12 testing materials and 7 of 12 testing materials, respectively.
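The one-class idea is to model only the "live" class and reject anything that does not fit it. As a simplified stand-in for the paper's multiple-GAN approach, the sketch below trains a small autoencoder on live images only and flags inputs with high reconstruction error; the image size, architecture, and threshold are all hypothetical.

```python
import torch
import torch.nn as nn

# Simplified stand-in for the GAN-based detectors: an autoencoder is trained
# only on live fingerprint crops; spoofs should reconstruct poorly.
autoencoder = nn.Sequential(
    nn.Flatten(),
    nn.Linear(96 * 96, 128), nn.ReLU(),     # hypothetical 96x96 grayscale crops
    nn.Linear(128, 96 * 96), nn.Sigmoid(),
)
optimizer = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)

def train_on_live(live_batch: torch.Tensor) -> float:
    """One training step on live-only data."""
    optimizer.zero_grad()
    recon = autoencoder(live_batch).view_as(live_batch)
    loss = nn.functional.mse_loss(recon, live_batch)
    loss.backward()
    optimizer.step()
    return loss.item()

def is_spoof(image: torch.Tensor, threshold: float = 0.02) -> bool:
    """Reject anything the live-only model reconstructs poorly (threshold is arbitrary)."""
    with torch.no_grad():
        recon = autoencoder(image).view_as(image)
        return nn.functional.mse_loss(recon, image).item() > threshold

live = torch.rand(16, 1, 96, 96)    # hypothetical live batch
train_on_live(live)
print(is_spoof(torch.rand(1, 1, 96, 96)))
```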
Citations: 45
Learning-Free Iris Segmentation Revisited: A First Step Toward Fast Volumetric Operation Over Video Samples
Pub Date : 2019-01-06 DOI: 10.1109/ICB45273.2019.8987377
Jeffery Kinnison, Mateusz Trokielewicz, Camila Carballo, A. Czajka, W. Scheirer
Subject matching performance in iris biometrics is contingent upon fast, high-quality iris segmentation. In many cases, iris biometrics acquisition equipment takes a number of images in sequence and combines the segmentation and matching results for each image to strengthen the result. To date, segmentation has occurred in 2D, operating on each image individually. But such methodologies, while powerful, do not take advantage of potential gains in performance afforded by treating sequential images as volumetric data. As a first step in this direction, we apply the Flexible Learning-Free Reconstruction of Neural Volumes (FLoRIN) framework, an open source segmentation and reconstruction framework originally designed for neural microscopy volumes, to volumetric segmentation of iris videos. Further, we introduce a novel dataset of near-infrared iris videos, in which each subject’s pupil rapidly changes size due to visible-light stimuli, as a test bed for FLoRIN. We compare the matching performance for iris masks generated by FLoRIN, deep-learning-based (SegNet), and Daugman’s (OSIRIS) iris segmentation approaches. We show that by incorporating volumetric information, FLoRIN achieves a factor of 3.6 to an order of magnitude increase in throughput with only a minor drop in subject matching performance. We also demonstrate that FLoRIN-based iris segmentation maintains this speedup on low-resource hardware, making it suitable for embedded biometrics systems.
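The learning-free, volumetric treatment means each voxel is thresholded against statistics of its spatio-temporal neighborhood, so adjacent frames reinforce each frame's mask. The sketch below illustrates only that volumetric-thresholding idea; the window size, the threshold factor, and the dark-region assumption are arbitrary stand-ins, not the FLoRIN implementation.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def volumetric_threshold(video: np.ndarray, window=(3, 15, 15), factor: float = 0.9) -> np.ndarray:
    """Learning-free 3D segmentation sketch: compare each voxel with the mean of
    its spatio-temporal neighborhood (frames x rows x cols).

    video: [T, H, W] float array of near-infrared frames scaled to [0, 1].
    Returns a boolean mask marking voxels darker than `factor` times the local
    mean, a rough stand-in for dark iris/pupil regions.
    """
    local_mean = uniform_filter(video, size=window)   # 3D moving average
    return video < factor * local_mean

frames = np.random.rand(10, 120, 160)                 # hypothetical 10-frame clip
mask = volumetric_threshold(frames)
print(mask.shape, mask.mean())
```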
Citations: 13
Iris Recognition with Image Segmentation Employing Retrained Off-the-Shelf Deep Neural Networks
Pub Date : 2019-01-04 DOI: 10.1109/ICB45273.2019.8987299
Daniel Kerrigan, Mateusz Trokielewicz, A. Czajka, K. Bowyer
This paper offers three new, open-source, deep learning-based iris segmentation methods, and a methodology for using irregular segmentation masks in conventional Gabor-wavelet-based iris recognition. To train and validate the methods, we used a wide spectrum of iris images acquired by different teams and different sensors and offered publicly, including data taken from CASIA-Iris-Interval-v4, BioSec, ND-Iris-0405, UBIRIS, Warsaw-BioBase-Post-Mortem-Iris v2.0 (post-mortem iris images), and ND-TWINS-2009-2010 (iris images acquired from identical twins). This varied training data should increase the generalization capabilities of the proposed segmentation techniques. In database-disjoint training and testing, we show that deep learning-based segmentation outperforms the conventional (OSIRIS) segmentation in terms of Intersection over Union calculated between the obtained results and manually annotated ground-truth. Interestingly, the Gabor-based iris matching is not always better when deep learning-based segmentation is used, and is on par with the method employing Daugman’s segmentation.
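Segmentation quality here is reported as Intersection over Union between each predicted mask and the manually annotated ground truth; the metric itself is a few lines (sketch with hypothetical binary masks):

```python
import numpy as np

def iou(pred: np.ndarray, truth: np.ndarray) -> float:
    """Intersection over Union between two binary segmentation masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    union = np.logical_or(pred, truth).sum()
    if union == 0:
        return 1.0                      # both masks empty: treat as perfect agreement
    return float(np.logical_and(pred, truth).sum() / union)

# Hypothetical 240x320 iris masks that partially overlap.
pred = np.zeros((240, 320), dtype=bool)
pred[80:160, 100:220] = True
truth = np.zeros((240, 320), dtype=bool)
truth[85:165, 105:225] = True
print(round(iou(pred, truth), 3))
```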
Citations: 23