
Latest publications: 2017 IEEE International Joint Conference on Biometrics (IJCB)

Robust face presentation attack detection on smartphones: An approach based on variable focus
Pub Date : 2017-10-01 DOI: 10.1109/BTAS.2017.8272753
K. Raja, P. Wasnik, Ramachandra Raghavendra, C. Busch
Smartphone-based facial biometric systems are widely used in security applications, from simple phone unlocking to secure banking. This work presents a new approach that exploits an intrinsic characteristic of the smartphone camera to capture a stack of images across the depth of field. With the stack of images obtained, we present a feature-free and classifier-free approach to building a presentation-attack-resistant face biometric system. With the entire system implemented on the smartphone, we demonstrate the applicability of the proposed scheme: a stack of images with varying focus is used to effectively detect presentation attacks. We create a new database of 13,250 images at different focal lengths to present a detailed vulnerability analysis together with an evaluation of the proposed scheme. An extensive evaluation on the newly created database, comprising 5 different Presentation Attack Instruments (PAI), demonstrates outstanding performance of the proposed approach on all 5 PAIs. Given the complementary benefits of the proposed approach illustrated in this work, we deduce its robustness towards unseen 2D attacks.
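The key intuition can be illustrated with a simple focus measure: a planar 2D artefact (print or display) stays roughly uniformly sharp or blurred across a focal stack, while a genuine 3D face shows depth-dependent variation. A minimal sketch, assuming variance of the Laplacian as the sharpness proxy (a standard choice; not necessarily the exact measure used in the paper):

```python
import numpy as np

def focus_measure(img: np.ndarray) -> float:
    """Variance of a discrete Laplacian response: a standard sharpness proxy."""
    lap = (-4 * img
           + np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0)
           + np.roll(img, 1, axis=1) + np.roll(img, -1, axis=1))
    return float(lap.var())

def focus_profile(stack):
    """Sharpness of each frame in a focal stack (list of 2D float arrays)."""
    return [focus_measure(f) for f in stack]
```

A profile that barely changes across the stack would then hint at a flat presentation attack.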
Citations: 2
Using associative classification to authenticate mobile device users
Pub Date : 2017-10-01 DOI: 10.1109/BTAS.2017.8272684
T. Neal, D. Woodard
Because passwords and personal identification numbers are easily forgotten, stolen, or reused across multiple accounts, the current norm for mobile device security is quickly becoming inefficient and inconvenient. Manufacturers have therefore worked to make physiological biometrics accessible to mobile device owners as an improved security measure. While behavioral biometrics has yet to receive commercial attention, researchers continue to consider these approaches as well. However, studies of interactive data are limited, and efforts aimed at improving the performance of such techniques remain relevant. This paper therefore provides a performance analysis of application, Bluetooth, and Wi-Fi data collected from 189 subjects on mobile devices for user verification. Results indicate that user authentication can be achieved with up to 91% accuracy, demonstrating the effectiveness of associative classification as a feature extraction technique.
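As a rough illustration of associative classification in this setting, consider mining per-user event-pair rules from labelled sessions and voting over matched rules at test time. This is a toy sketch of the general idea, with invented event names; it is not the authors' algorithm or feature set:

```python
from collections import Counter
from itertools import combinations

def mine_rules(sessions, min_support=2):
    """Association rules (event-pair -> user): a pair is kept for a user if
    it occurs in at least `min_support` of that user's training sessions."""
    by_user = {}
    for user, events in sessions:
        by_user.setdefault(user, []).append(set(events))
    rules = {}
    for user, sess_list in by_user.items():
        counts = Counter()
        for s in sess_list:
            for pair in combinations(sorted(s), 2):
                counts[pair] += 1
        for pair, c in counts.items():
            if c >= min_support:
                rules.setdefault(pair, Counter())[user] += c
    return rules

def classify(events, rules):
    """Vote over every rule whose antecedent pair appears in the session."""
    votes, s = Counter(), set(events)
    for pair, users in rules.items():
        if set(pair) <= s:
            votes.update(users)
    return votes.most_common(1)[0][0] if votes else None
```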
Citations: 7
Shared dataset on natural human-computer interaction to support continuous authentication research
Pub Date : 2017-10-01 DOI: 10.1109/BTAS.2017.8272738
Chris Murphy, Jiaju Huang, Daqing Hou, S. Schuckers
Conventional one-stop authentication of a computer terminal takes place at the user's initial sign-on. In contrast, continuous authentication protects against the case where an intruder takes over an authenticated terminal or simply has access to sign-on credentials. Behavioral biometrics has had some success in providing continuous authentication without requiring additional hardware. However, further advancement requires benchmarking existing algorithms against large, shared datasets. To this end, we provide a novel large dataset that captures not only keystrokes but also mouse events and active programs. Our dataset is collected using passive logging software that monitors user interactions with the mouse, keyboard, and software programs. Data was collected from 103 users in a completely uncontrolled, natural setting over a span of 2.5 years. We apply Gunetti & Picardi's algorithm, a state-of-the-art algorithm for free-text keystroke dynamics, as an initial benchmark for the new dataset.
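Gunetti & Picardi's free-text approach compares typing samples by the relative ordering of shared n-graph durations rather than by the durations themselves. A condensed sketch of their disorder-based "R" measure, with each sample represented as a map from n-graph to mean duration (a simplification of the published method):

```python
def r_distance(sample_a, sample_b):
    """Relative disorder between the duration orderings of the n-graphs
    shared by two typing samples; 0 = identical ordering, 1 = reversed."""
    shared = sorted(set(sample_a) & set(sample_b))
    if len(shared) < 2:
        return 1.0  # nothing meaningful to compare
    order_a = sorted(shared, key=lambda g: sample_a[g])
    order_b = sorted(shared, key=lambda g: sample_b[g])
    pos_b = {g: i for i, g in enumerate(order_b)}
    disorder = sum(abs(i - pos_b[g]) for i, g in enumerate(order_a))
    n = len(shared)
    max_disorder = (n * n - (n % 2)) // 2  # max total displacement
    return disorder / max_disorder
```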
Citations: 29
Towards open-set face recognition using hashing functions
Pub Date : 2017-10-01 DOI: 10.1109/BTAS.2017.8272751
R. H. Vareto, Samira Silva, F. Costa, W. R. Schwartz
Face recognition is one of the most relevant problems in computer vision, given its importance to areas such as surveillance, forensics, and psychology. Open-set face recognition in particular leaves large room for improvement, since few researchers have focused on it. In fact, a real-world recognition system has to cope with numerous unseen individuals and determine whether a given face image is associated with a subject registered in a gallery of known individuals. In this work, we combine hashing functions and classification methods to estimate when probe samples are known (i.e., belong to the gallery set). We carry out experiments with partial least squares and neural networks and show how response-value histograms tend to behave for known and unknown individuals when a probe sample is tested. In addition, we conduct experiments on FRGCv1, PubFig83, and VGGFace to show that our method remains effective regardless of dataset difficulty.
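One way to picture the response-histogram idea: train many weak "hashing functions", each separating the gallery along a random identity bipartition, and let a probe vote for every identity on its chosen side. A known probe concentrates votes on one identity; an unknown probe tends to spread them. A numpy-only sketch, with nearest-centroid splits standing in for the paper's partial-least-squares and neural-network models:

```python
import numpy as np

def train_hash_models(gallery, labels, n_models=30, seed=0):
    """Each 'hashing function' is a random bipartition of gallery identities,
    summarised by the mean embedding of each side."""
    rng = np.random.default_rng(seed)
    ids = sorted({int(l) for l in labels})
    models = []
    for _ in range(n_models):
        while True:  # redraw until both sides are non-empty
            side = {i: int(rng.integers(0, 2)) for i in ids}
            if len(set(side.values())) == 2:
                break
        mask = np.array([side[int(l)] == 1 for l in labels])
        models.append((gallery[~mask].mean(axis=0),
                       gallery[mask].mean(axis=0), side))
    return models

def vote_histogram(probe, models):
    """One vote per model for every identity on the side nearer the probe."""
    votes = {}
    for c0, c1, side in models:
        chosen = int(np.linalg.norm(probe - c1) < np.linalg.norm(probe - c0))
        for i, s in side.items():
            votes[i] = votes.get(i, 0) + (s == chosen)
    return votes

def is_known(votes, n_models, ratio=0.75):
    """A peaked vote histogram suggests the probe is in the gallery."""
    return max(votes.values()) >= ratio * n_models
```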
Citations: 29
ICFVR 2017: 3rd international competition on finger vein recognition
Pub Date : 2017-10-01 DOI: 10.1109/BTAS.2017.8272760
Yi Zhang, Houjun Huang, Haifeng Zhang, Liao Ni, W. Xu, N. U. Ahmed, Md. Shakil Ahmed, Yilun Jin, Ying Chen, Jingxuan Wen, Wenxin Li
In recent years, finger vein recognition has become an important sub-field of biometrics and has been applied in real-world settings. The development of finger vein recognition algorithms depends heavily on large-scale real-world datasets. To motivate research on finger vein recognition, we released the largest finger vein dataset to date and hold finger vein recognition competitions based on our dataset every year. In 2017, the International Competition on Finger Vein Recognition (ICFVR) was held jointly with IJCB 2017. 11 teams registered and 10 of them joined the final evaluation. This year's winner dramatically improved the EER from 2.64% to 0.483% compared to last year's winner. In this paper, we introduce the process and results of ICFVR 2017 and give insights on the development of state-of-the-art finger vein recognition algorithms.
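The EER quoted above is the operating point at which the false accept and false reject rates are equal. A minimal sketch of estimating it from genuine and impostor score lists by a simple threshold scan (higher score = better match; this is a generic illustration, not the competition's evaluation code):

```python
import numpy as np

def eer(genuine, impostor):
    """Scan observed score thresholds and return the point where the
    false-reject rate (FRR) and false-accept rate (FAR) are closest."""
    genuine, impostor = np.asarray(genuine), np.asarray(impostor)
    best_gap, best_eer = 1.0, 0.5
    for t in np.sort(np.concatenate([genuine, impostor])):
        frr = np.mean(genuine < t)    # genuine scores rejected at t
        far = np.mean(impostor >= t)  # impostor scores accepted at t
        if abs(frr - far) < best_gap:
            best_gap, best_eer = abs(frr - far), (frr + far) / 2
    return float(best_eer)
```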
Citations: 3
Extracting sub-glottal and supra-glottal features from MFCC using convolutional neural networks for speaker identification in degraded audio signals
Pub Date : 2017-10-01 DOI: 10.1109/BTAS.2017.8272748
Anurag Chowdhury, A. Ross
We present a deep learning based algorithm for speaker recognition from degraded audio signals. We use the commonly employed Mel-Frequency Cepstral Coefficients (MFCC) to represent the audio signals. A convolutional neural network (CNN) based on 1D filters, rather than 2D filters, is then designed. The filters in the CNN are designed to learn the inter-dependency between cepstral coefficients extracted from audio frames of fixed temporal extent. Our approach aims at extracting speaker-dependent characteristics of the human speech production apparatus, such as sub-glottal and supra-glottal features, to identify speakers from degraded audio signals. The performance of the proposed method is compared against existing baseline schemes on both synthetically and naturally corrupted speech data. Experiments convey the efficacy of the proposed architecture for speaker recognition.
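To make the 1D-filter idea concrete, here is a minimal sketch of a "valid" 1D convolution slid across the cepstral coefficients of each MFCC frame, so each response depends on a small window of neighbouring coefficients. This illustrates only the filter shape, not the paper's trained network:

```python
import numpy as np

def conv1d(signal, kernel):
    """'Valid' 1D cross-correlation of a vector with a kernel."""
    signal, kernel = np.asarray(signal, float), np.asarray(kernel, float)
    w = len(kernel)
    return np.array([signal[i:i + w] @ kernel
                     for i in range(len(signal) - w + 1)])

def frame_responses(mfcc, kernel):
    """Apply one 1D filter along the coefficient axis of every frame;
    mfcc has shape (n_frames, n_coefficients)."""
    return np.stack([conv1d(frame, kernel) for frame in mfcc])
```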
Citations: 10
In defense of low-level structural features and SVMs for facial attribute classification: Application to detection of eye state, mouth state, and eyeglasses in the wild
Pub Date : 2017-10-01 DOI: 10.1109/BTAS.2017.8272747
Abdulaziz Alorf, A. L. Abbott
The current trend in image analysis is to employ automatically detected feature types, such as those obtained using deep-learning techniques. For some applications, however, manually crafted features such as Histograms of Oriented Gradients (HOG) continue to yield better performance in demanding situations. This paper considers both approaches to the problem of facial attribute classification for images obtained "in the wild." Attributes of particular interest are eye state (open/closed), mouth state (open/closed), and eyeglasses (present/absent). We present a full face-processing pipeline, from detection to attribute classification, that employs conventional machine learning techniques. Experimental results indicate better performance using RootSIFT with a conventional support-vector machine (SVM) approach than deep-learning approaches reported in the literature. Our proposed open/closed-eye classifier yields an accuracy of 99.3% on the CEW dataset and 98.7% on the ZJU dataset. Similarly, our proposed open/closed-mouth classifier achieves performance similar to deep learning. Our proposed presence/absence eyeglasses classifier also delivers very good performance, being the best method on LFWA and second best on the CelebA dataset. The system reported here runs at 30 fps on HD-sized video using a CPU-only implementation.
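RootSIFT is a small, well-documented transform on top of standard SIFT (Arandjelović & Zisserman, 2012): L1-normalise the descriptor, then take the element-wise square root, so that comparing the results with Euclidean distance corresponds to the Hellinger kernel on the originals. A sketch of the transform itself (descriptor extraction omitted):

```python
import numpy as np

def root_sift(desc, eps=1e-7):
    """RootSIFT: L1-normalise a descriptor, then element-wise square root."""
    desc = np.asarray(desc, dtype=float)
    desc = desc / (np.abs(desc).sum() + eps)
    return np.sqrt(desc)
```

After the transform the descriptor has (approximately) unit L2 norm, so standard SVM pipelines can consume it unchanged.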
Citations: 6
Towards pre-alignment of near-infrared iris images
Pub Date : 2017-10-01 DOI: 10.1109/BTAS.2017.8272718
P. Drozdowski, C. Rathgeb, H. Hofbauer, J. Wagner, A. Uhl, C. Busch
The necessity of biometric template alignment imposes a significant computational load and increases the probability of false positives in biometric systems. While automatic pre-alignment of biometric samples is used for some modalities, this topic has not yet been explored for systems based on the iris. This paper presents a method for pre-alignment of iris images based on the positions of automatically detected eye corners. Existing work on automatic eye corner detection has hitherto involved only visible-wavelength images; for the near-infrared images used in the vast majority of current iris recognition systems, this task is significantly more challenging and as yet unexplored. A comparative study of two methods for solving this problem is presented. The eye corners detected by the two methods are then used in pre-alignment and biometric performance evaluation experiments. The system utilising image pre-alignment is benchmarked against a baseline iris recognition system on the iris subset of the BioSecure database. In the benchmark, the workload associated with alignment compensation is significantly reduced, while the biometric performance remains unchanged or even improves slightly.
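The rotational part of such a pre-alignment can be sketched directly: the in-plane roll of the eye is the angle of the line through the two detected corners, and rotating the image by its negative levels that line. A minimal sketch under that assumption (corner detection itself omitted):

```python
import math

def roll_angle(inner_corner, outer_corner):
    """In-plane rotation (degrees) implied by two detected eye corners,
    each given as (x, y) in image coordinates."""
    dx = outer_corner[0] - inner_corner[0]
    dy = outer_corner[1] - inner_corner[1]
    return math.degrees(math.atan2(dy, dx))

def rotate_point(p, center, angle_deg):
    """Rotate point p about center by angle_deg (to de-roll landmarks)."""
    a = math.radians(angle_deg)
    x, y = p[0] - center[0], p[1] - center[1]
    return (center[0] + x * math.cos(a) - y * math.sin(a),
            center[1] + x * math.sin(a) + y * math.cos(a))
```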
Citations: 4
Finger vein image retrieval via affinity-preserving K-means hashing
Pub Date : 2017-10-01 DOI: 10.1109/BTAS.2017.8272720
Kun Su, Gongping Yang, Lu Yang, Yilong Yin
Efficient identification of finger veins remains a challenging problem due to the increasing size of finger vein databases. Most leading finger vein image identification methods use high-dimensional real-valued features, which result in extremely high computational complexity. Hashing algorithms are extraordinarily effective ways to facilitate finger vein image retrieval. In this paper, we therefore propose a finger vein image retrieval scheme based on the Affinity-Preserving K-means Hashing (APKMH) algorithm and a bag-of-subspaces image feature. First, we represent finger vein images with the Nonlinearly Sub-space Coding (NSC) method, which obtains discriminative finger vein image features. The feature space is then partitioned into multiple subsegments. In each subsegment, we employ the APKMH algorithm, which simultaneously constructs the visual codebook directly by k-means clustering and encodes the feature vector as the binary index of its codeword. Experimental results on a large fused finger vein dataset demonstrate that our hashing method outperforms state-of-the-art finger vein retrieval methods.
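The encoding step resembles product quantisation: split the feature vector into subsegments, run k-means in each, and store each segment's nearest-codeword index. A numpy-only sketch of that pipeline (plain k-means here; APKMH additionally optimises the codebook so that distances between binary indices preserve affinities):

```python
import numpy as np

def kmeans(data, k, iters=20, seed=0):
    """Plain k-means over row vectors; returns the k centroids (codebook)."""
    rng = np.random.default_rng(seed)
    centers = data[rng.choice(len(data), k, replace=False)].astype(float)
    for _ in range(iters):
        dists = np.linalg.norm(data[:, None] - centers[None], axis=2)
        assign = dists.argmin(axis=1)
        for j in range(k):
            if np.any(assign == j):
                centers[j] = data[assign == j].mean(axis=0)
    return centers

def encode(vec, codebooks, splits):
    """Store, per subsegment (lo, hi), the index of the nearest codeword."""
    return [int(np.linalg.norm(cb - vec[lo:hi], axis=1).argmin())
            for (lo, hi), cb in zip(splits, codebooks)]
```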
引用次数: 5
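The subsegment encoding step described above can be sketched as follows. This is a minimal, product-quantization-style illustration of partitioning a feature vector into subsegments, learning a k-means codebook per segment, and encoding each vector as the indices of its nearest codewords; it is not the authors' APKMH implementation (the affinity-preserving optimization is omitted), and all function names are hypothetical.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain Lloyd's k-means; returns the (k, d) centroid matrix."""
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), k, replace=False)]  # copy via fancy indexing
    for _ in range(iters):
        d = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)
        assign = d.argmin(1)
        for j in range(k):
            pts = X[assign == j]
            if len(pts):
                C[j] = pts.mean(0)
    return C

def train_codebooks(X, n_segments=4, bits=4):
    """Split each vector into n_segments subsegments and learn a
    2**bits-word codebook per segment."""
    segs = np.array_split(X, n_segments, axis=1)
    return [kmeans(s, 2 ** bits) for s in segs]

def encode(X, codebooks):
    """Encode each vector as the concatenated codeword indices,
    one small integer per subsegment."""
    segs = np.array_split(X, len(codebooks), axis=1)
    codes = []
    for s, C in zip(segs, codebooks):
        d = ((s[:, None, :] - C[None, :, :]) ** 2).sum(-1)
        codes.append(d.argmin(1))
    return np.stack(codes, axis=1)  # shape (n, n_segments)

# Toy usage on random 32-dimensional features.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 32))
books = train_codebooks(X, n_segments=4, bits=4)
codes = encode(X, books)
print(codes.shape)  # (200, 4)
```

Each code column holds a 4-bit index, so a 32-dimensional real vector compresses to 16 bits, and retrieval can compare codes instead of raw features.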
Location-sensitive sparse representation of deep normal patterns for expression-robust 3D face recognition
Pub Date : 2017-10-01 DOI: 10.1109/BTAS.2017.8272703
Huibin Li, Jian Sun, Liming Chen
This paper presents a straightforward yet efficient, expression-robust 3D face recognition approach that explores location-sensitive sparse representation of deep normal patterns (DNP). Given raw 3D facial surfaces, we first run a 3D face pre-processing pipeline, including nose tip detection, face region cropping, and pose normalization. The 3D coordinates of each normalized facial surface are then projected onto the 2D plane to generate geometry images, from which three images of facial surface normal components are estimated. Each normal image is fed into a pre-trained deep face net to generate deep representations of facial surface normals, i.e., deep normal patterns. Considering the importance of different facial locations, we propose a location-sensitive sparse representation classifier (LS-SRC) for similarity measurement among deep normal patterns of different 3D faces. Finally, simple score-level fusion of the normal components is used for the final decision. The proposed approach achieves high performance, reporting rank-one scores of 98.01%, 97.60%, and 96.13% on the FRGC v2.0, Bosphorus, and BU-3DFE databases when only one sample per subject is used in the gallery. These experimental results reveal that the performance of 3D face recognition can be constantly improved with the aid of deep models trained on massive 2D face images, which opens the door to future directions in 3D face recognition.
Citations: 10
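The normal-image step above (estimating the three surface normal components from a geometry image) can be sketched as follows. This is a minimal illustration under the assumption that the geometry image stores a 3D coordinate per pixel, computing per-pixel normals as the cross product of the image-grid tangent vectors; it is not the authors' pipeline code.

```python
import numpy as np

def normal_maps(geom):
    """geom: (H, W, 3) array of 3D coordinates sampled on a 2D grid.
    Returns an (H, W, 3) array of unit surface normals; its three
    channels are the normal-component images (nx, ny, nz)."""
    du = np.gradient(geom, axis=1)  # tangent along image columns
    dv = np.gradient(geom, axis=0)  # tangent along image rows
    n = np.cross(du, dv)            # cross product over the last axis
    norm = np.linalg.norm(n, axis=2, keepdims=True)
    return n / np.clip(norm, 1e-12, None)

# Toy usage: the flat plane z = 0 has constant normal (0, 0, 1).
H, W = 8, 8
ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
geom = np.stack([xs, ys, np.zeros_like(xs)], axis=2).astype(float)
n = normal_maps(geom)
print(n[0, 0])  # [0. 0. 1.]
```

On a real pre-processed face, each of the three channels of `n` would be rendered as a separate normal image and fed to the deep network.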
Journal
2017 IEEE International Joint Conference on Biometrics (IJCB)