
Latest publications from the 2018 International Conference on Biometrics (ICB)

Conformal Mapping of a 3D Face Representation onto a 2D Image for CNN Based Face Recognition
Pub Date: 2018-07-16 DOI: 10.1109/ICB2018.2018.00029
J. Kittler, P. Koppen, P. Kopp, P. Huber, Matthias Rätsch
Fitting 3D Morphable Face Models (3DMM) to a 2D face image allows the separation of face shape from skin texture, as well as correction for facial expression. However, the recovered 3D face representation is not readily amenable to processing by convolutional neural networks (CNN). We propose a conformal mapping from a 3D mesh to a 2D image, which makes these machine learning tools accessible to 3D face data. Experiments with a CNN-based face recognition system designed using the proposed representation have been carried out to validate the advocated approach. The results obtained on standard benchmarking data sets show its promise.
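The conformal parameterization itself is the paper's contribution; purely as an illustration of the final rasterization step, the sketch below turns an already-computed 2D embedding of a face mesh into a regular image a CNN can consume. The inputs `uv` (per-vertex embedding coordinates) and `vertex_colors` are assumptions, not the authors' data structures.

```python
# Hedged sketch: rasterize a conformally parameterized face mesh into a 2D
# image. Assumes `uv` holds per-vertex embedding coordinates in [0, 1]^2
# produced by some conformal mapping step, and `vertex_colors` per-vertex RGB.
import numpy as np
from scipy.interpolate import griddata

def mesh_to_image(uv, vertex_colors, size=224):
    grid_u, grid_v = np.meshgrid(np.linspace(0, 1, size),
                                 np.linspace(0, 1, size))
    image = np.zeros((size, size, 3), dtype=np.float32)
    for c in range(3):  # interpolate each color channel onto the pixel grid
        image[..., c] = griddata(uv, vertex_colors[:, c], (grid_u, grid_v),
                                 method="linear", fill_value=0.0)
    return image
```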
Citations: 17
Multifactor User Authentication with In-Air-Handwriting and Hand Geometry
Pub Date: 2018-07-13 DOI: 10.1109/ICB2018.2018.00046
Duo Lu, Dijiang Huang, Yuli Deng, Adel Alshamrani
On wearable and Virtual Reality (VR) platforms, user authentication is a basic function, but a keyboard or touchscreen usually cannot be provided for typing a password. Hand gestures, and especially in-air-handwriting, can potentially be used for user authentication because a gesture input interface is readily available on these platforms. However, determining whether a login request comes from the legitimate user based on a piece of hand movement is challenging in both signal processing and matching, which leads to limited performance in existing systems. In this paper, we propose a multifactor user authentication framework using both the motion signal of a piece of in-air-handwriting and the geometry of the hand skeleton captured by a depth camera. To demonstrate this framework, we invented a signal matching algorithm, implemented a prototype, and conducted experiments on a dataset of 100 users that we collected. Our system achieves 0.6% Equal Error Rate (EER) without spoofing attacks and 3.4% EER with spoofing-only data, a significant improvement over existing systems using the Dynamic Time Warping (DTW) algorithm. In addition, we present an in-depth analysis of the utilized features to explain the reason for the performance boost.
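The paper's matcher is its own invention; as a point of reference, the DTW baseline it improves on can be sketched in a few lines. This is a generic implementation under the assumption that each handwriting sample is a sequence of multi-axis motion vectors.

```python
# Minimal Dynamic Time Warping (DTW) distance between two in-air-handwriting
# motion signals -- the baseline matcher the paper compares against, not the
# authors' proposed algorithm.
import numpy as np

def dtw_distance(a, b):
    """a: (n, d) and b: (m, d) sequences of motion samples."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])  # local distance
            D[i, j] = cost + min(D[i - 1, j],      # insertion
                                 D[i, j - 1],      # deletion
                                 D[i - 1, j - 1])  # match
    return D[n, m] / (n + m)  # normalize by a bound on the path length
```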
Citations: 17
SSBC 2018: Sclera Segmentation Benchmarking Competition
Pub Date: 2018-07-13 DOI: 10.1109/BTAS.2015.7358796
Abhijit Das, U. Pal, M. A. Ferrer-Ballester, M. Blumenstein
This paper summarises the results of the Sclera Segmentation Benchmarking Competition (SSBC 2018), organised in the context of the 11th IAPR International Conference on Biometrics (ICB 2018). The aim of this competition was to record the developments in sclera segmentation in the cross-sensor environment (the sclera trait captured using multiple acquisition sensors). Additionally, the competition aimed to draw the attention of researchers to this subject of research. For the purpose of benchmarking, we developed two datasets of sclera images captured using different sensors. The first dataset, collected using a DSLR camera, is the Multi-Angle Sclera Dataset (MASD version 1), which was used in previous editions of the sclera segmentation competitions. The images in the second dataset were captured using an 8-megapixel mobile phone rear camera. As a baseline, manual segmentation masks of the sclera images from both datasets were developed. Precision- and recall-based statistical measures were employed to evaluate the effectiveness of the submitted segmentation techniques and to rank them. Six algorithms were submitted for the segmentation task. This paper analyses the results produced by these algorithms/systems and defines a way forward for this subject of research. Both datasets, along with some of the accompanying ground-truth/baseline masks, will be freely available for research purposes upon request to the authors by email.
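As a concrete reading of the evaluation protocol, the fragment below computes the pixel-wise precision and recall of a predicted sclera mask against the manual baseline mask. It is a plain restatement of the standard definitions, with array names chosen for illustration.

```python
# Pixel-wise precision and recall of a binary sclera segmentation mask
# against the manual ground-truth mask (standard definitions).
import numpy as np

def precision_recall(pred, truth):
    """pred, truth: boolean arrays of equal shape (True = sclera pixel)."""
    tp = np.logical_and(pred, truth).sum()  # true-positive pixel count
    precision = tp / pred.sum() if pred.sum() else 0.0
    recall = tp / truth.sum() if truth.sum() else 0.0
    return precision, recall
```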
Citations: 24
Evolutionary Methods for Generating Synthetic MasterPrint Templates: Dictionary Attack in Fingerprint Recognition
Pub Date: 2018-07-13 DOI: 10.1109/ICB2018.2018.00017
Aditi Roy, N. Memon, J. Togelius, A. Ross
Recent research has demonstrated the possibility of generating "Masterprints" that can be used by an adversary to launch a dictionary attack against a fingerprint recognition system. Masterprints are fingerprint images that fortuitously match with a large number of other fingerprints, thereby compromising the security of a fingerprint-based biometric system, especially one equipped with a small-sized fingerprint sensor. This work presents new methods for creating a synthetic MasterPrint dictionary that sequentially maximizes the probability of matching a large number of target fingerprints. Three techniques, namely Covariance Matrix Adaptation Evolution Strategy (CMA-ES), Differential Evolution (DE) and Particle Swarm Optimization (PSO), are explored. Experiments carried out using a commercial fingerprint verification software and public datasets show that the proposed approaches performed quite well compared to the previously known MasterPrint generation methods.
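The general shape of such an attack is a black-box search that maximizes the number of matched targets. The sketch below uses SciPy's Differential Evolution as a stand-in for the paper's three optimizers; `render_fingerprint` and `matcher_score` are hypothetical placeholders for a template generator and the commercial matcher, not real APIs.

```python
# Hedged sketch of the evolutionary MasterPrint search. The fitness counts
# how many target fingerprints a candidate template matches at a threshold.
from scipy.optimize import differential_evolution

def make_fitness(targets, threshold, render_fingerprint, matcher_score):
    def negative_match_count(x):
        probe = render_fingerprint(x)  # synthesize a print from parameters x
        hits = sum(matcher_score(probe, t) >= threshold for t in targets)
        return -hits  # DE minimizes, so negate the matched-target count
    return negative_match_count

# Example, under an assumed 32-dimensional template parameterization:
# result = differential_evolution(
#     make_fitness(targets, 0.5, render_fingerprint, matcher_score),
#     bounds=[(-1.0, 1.0)] * 32)
```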
Citations: 21
Two-Stream Part-Based Deep Representation for Human Attribute Recognition
Pub Date: 2018-07-13 DOI: 10.1109/ICB2018.2018.00024
R. Anwer, F. Khan, Jorma T. Laaksonen
Recognizing human attributes in unconstrained environments is a challenging computer vision problem. State-of-the-art approaches to human attribute recognition are based on convolutional neural networks (CNNs). The de facto practice when training these CNNs on a large labeled image dataset is to take the RGB pixel values of an image as input to the network. In this work, we propose a two-stream part-based deep representation for human attribute classification. Besides the standard RGB stream, we train a deep network using mapped coded images with explicit texture information, which complements the standard RGB deep model. To integrate knowledge of human body parts, we employ deformable part-based models together with our two-stream deep model. Experiments are performed on the challenging Human Attributes (HAT-27) dataset, consisting of 27 different human attributes. Our results clearly show that (a) the two-stream deep network provides a consistent gain in performance over the standard RGB model and (b) the attribute classification results are further improved with our two-stream part-based deep representations, leading to state-of-the-art results.
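A minimal sketch of the two-stream idea, under assumed shapes rather than the authors' exact architecture: one backbone sees the RGB image, the other the texture-coded image, and their features are concatenated before a per-attribute classifier.

```python
# Two-stream fusion sketch (assumed architecture, not the paper's network).
import torch
import torch.nn as nn

class TwoStream(nn.Module):
    def __init__(self, rgb_backbone, tex_backbone, feat_dim, n_attributes=27):
        super().__init__()
        self.rgb = rgb_backbone  # maps (B, 3, H, W) -> (B, feat_dim)
        self.tex = tex_backbone  # same shape for the texture-coded stream
        self.head = nn.Linear(2 * feat_dim, n_attributes)

    def forward(self, rgb_img, tex_img):
        feats = torch.cat([self.rgb(rgb_img), self.tex(tex_img)], dim=1)
        return self.head(feats)  # one logit per attribute
```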
Citations: 1
LivDet 2017 Fingerprint Liveness Detection Competition 2017
Pub Date: 2018-03-14 DOI: 10.1109/ICB2018.2018.00052
V. Mura, G. Orrú, Roberto Casula, A. Sibiriu, G. Loi, Pierluigi Tuveri, Luca Ghiani, G. Marcialis
Fingerprint Presentation Attack Detection (FPAD) deals with distinguishing images coming from artificial replicas of the fingerprint characteristic, made of materials like silicone, gelatine or latex, from images coming from live fingerprints. Images are captured by modern scanners, typically relying on solid-state or optical technologies. Since 2009, the Fingerprint Liveness Detection Competition (LivDet) has aimed to assess the performance of state-of-the-art algorithms according to a rigorous experimental protocol and, at the same time, to provide a simple overview of the basic achievements. The competition is open to all academic research centers and all companies that work in this field. The positive, increasing trend in the number of participants, which supports the success of this initiative, was confirmed again this year: 17 algorithms were submitted to the competition, with a larger involvement of companies and academies. This means that the topic is relevant for both sides, and points out that a lot of work must still be done in terms of fundamental and applied research.
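LivDet entries are typically scored by two error rates at a fixed liveness-score threshold; the snippet below states the standard definitions (ferrlive: live fingers rejected as fake; ferrfake: spoofs accepted as live) as a reading aid, with the threshold value chosen for illustration.

```python
# Standard LivDet-style error rates at a fixed liveness threshold.
import numpy as np

def livdet_errors(live_scores, fake_scores, threshold=0.5):
    live = np.asarray(live_scores)
    fake = np.asarray(fake_scores)
    ferrlive = float(np.mean(live < threshold))   # live classified as fake
    ferrfake = float(np.mean(fake >= threshold))  # fake classified as live
    return ferrlive, ferrfake
```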
Citations: 56
Comparative Study of Digital Fingerprint Quality Assessment Metrics
Pub Date: 2018-02-20 DOI: 10.1109/ICB2018.2018.00014
Zhigang Yao, J. L. Bars, C. Charrier, C. Rosenberger
The quality assessment of biometric data serves as a tool to decide whether a biometric sample may be used to generate the user's reference template. Many studies have shown its significant impact on the subsequent performance of the biometric system. Since many metrics have been proposed for this purpose by researchers or standardization institutions, their relevance should be studied, in particular to evaluate their relative usefulness. This paper provides a comparative study of fingerprint quality assessment (FQA) metrics. We adopt the enrollment selection validation approach to perform an objective comparison of them. We show the efficiency of 7 well-known FQA metrics on 9 datasets. Results show a dependency of these metrics on the dataset (i.e., the fingerprint sensor), and a similar behavior for matchers.
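A plausible reading of the enrollment selection validation approach, sketched under assumptions: for each user, the sample ranked highest by a quality metric is enrolled as the template, and the remaining samples are matched against it; the resulting genuine scores feed a ROC/EER computation. `quality` (an FQA metric) and `match` (a fingerprint matcher) are hypothetical callables.

```python
# Hedged sketch of enrollment selection validation with stand-in callables.
def enrollment_selection(samples_by_user, quality, match):
    genuine_scores = []
    for user, samples in samples_by_user.items():
        template = max(samples, key=quality)  # best-quality sample enrolled
        probes = [s for s in samples if s is not template]
        genuine_scores += [match(template, p) for p in probes]
    return genuine_scores  # input to a ROC / EER computation
```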
Citations: 3
Boosting Face in Video Recognition via CNN Based Key Frame Extraction
Pub Date: 2018-02-01 DOI: 10.1109/ICB2018.2018.00030
Xuan Qi, Chen Liu, S. Schuckers
Face in video recognition (FiVR) technology is widely applied in various fields such as video analytics and real-time video surveillance. However, FiVR technology also faces the challenges of high-volume video data and real-time processing requirements, as well as improving the performance of face recognition (FR) algorithms. To overcome these challenges, frame selection becomes a necessary and beneficial step before the FR stage. In this paper, we propose a CNN-based key-frame extraction (KFE) engine with GPU acceleration, employing our innovative Face Quality Assessment (FQA) module. For theoretical performance analysis of the KFE engine, we evaluated representative one-person video datasets such as PaSC, FiA and ChokePoint using ROC and DET curves. For performance analysis under a practical scenario, we evaluated multi-person videos using the ChokePoint dataset as well as in-house captured full-HD videos. The experimental results show that our KFE engine can dramatically reduce the data volume while improving the FR performance. In addition, our KFE engine can achieve higher-than-real-time performance with GPU acceleration when dealing with HD videos in real application scenarios.
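The selection step itself reduces to scoring frames and keeping the best few. A minimal sketch, assuming a `face_quality` callable stands in for the paper's CNN-based FQA module:

```python
# Key-frame extraction sketch: keep the k frames with the highest face
# quality, preserving their temporal order. `face_quality` is an assumed
# stand-in for a CNN-based quality scorer.
import heapq

def extract_key_frames(frames, face_quality, k=5):
    scored = ((face_quality(f), i) for i, f in enumerate(frames))
    top = heapq.nlargest(k, scored)  # best (score, index) pairs
    return [frames[i] for _, i in sorted(top, key=lambda t: t[1])]
```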
Citations: 18
Securing Minutia Cylinder Codes for Fingerprints through Physically Unclonable Functions: An Exploratory Study
Pub Date: 2018-02-01 DOI: 10.1109/ICB2018.2018.00019
Rosario Arjona, Miguel A. Prada-Delgado, I. Baturone, A. Ross
A number of personal devices, such as smartphones, have incorporated fingerprint recognition solutions for user authentication purposes. This work proposes a dual-factor fingerprint matching scheme based on P-MCCs (Protected Minutia Cylinder-Codes) generated from fingerprint images and PUFs (Physically Unclonable Functions) generated from device SRAMs (Static Random Access Memories). Combining the fingerprint identifier with the device identifier results in a secure template satisfying the discriminability, irreversibility, revocability, and unlinkability properties, which are strongly desired for data privacy and security. Experiments demonstrate the benefits of the proposed dual-factor authentication mechanism in enhancing the security of personal devices that utilize biometric authentication schemes.
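The flavor of the dual-factor binding can be conveyed by a deliberately simplified construction: XOR-masking a binary template with an SRAM-PUF-derived key, so the stored template is useless off-device. This illustrates the idea only; it is not the paper's P-MCC construction, which must also tolerate biometric noise.

```python
# Simplified dual-factor binding: XOR a binary fingerprint template with a
# device-specific PUF key. Illustrative only; the paper's scheme is richer.
import numpy as np

def protect(template_bits, puf_key_bits):
    return np.bitwise_xor(template_bits, puf_key_bits)   # safe to store

def recover(protected_bits, puf_key_bits):
    return np.bitwise_xor(protected_bits, puf_key_bits)  # on-device only
```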
Citations: 13
Improving 2D Face Recognition via Discriminative Face Depth Estimation
Pub Date: 2018-02-01 DOI: 10.1109/ICB2018.2018.00031
Jiyun Cui, Hao Zhang, Hu Han, S. Shan, Xilin Chen
As face recognition progresses from constrained scenarios to unconstrained scenarios, new challenges such as large pose, bad illumination, and partial occlusion are encountered. While 3D or multi-modality RGB-D sensors help face recognition systems achieve robustness against these challenges, the requirement of new sensors limits their application scenarios. In this paper, we propose a discriminative face depth estimation approach to improve 2D face recognition accuracy under unconstrained scenarios. Our discriminative depth estimation method uses a cascaded FCN and CNN architecture, in which the FCN aims at recovering the depth from an RGB image, and the CNN retains the separability of individual subjects. The estimated depth information is then used as a modality complementary to RGB for face recognition tasks. Experiments on two public datasets and a dataset we collected show that the proposed face recognition method using RGB and estimated depth information achieves better accuracy than using the RGB modality alone.
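One plausible way to use the estimated depth as a complementary modality is score-level fusion, sketched below; `depth_net`, `rgb_embed`, and `depth_embed` are hypothetical stand-ins for the paper's FCN and the two recognition CNNs, and the fusion weight is an assumption.

```python
# Assumed score-fusion sketch: estimate depth from RGB, embed both
# modalities, and mix the two cosine similarities. All callables are
# hypothetical stand-ins for the paper's networks.
import torch.nn.functional as F

def fused_score(img_a, img_b, depth_net, rgb_embed, depth_embed, w=0.5):
    depth_a, depth_b = depth_net(img_a), depth_net(img_b)  # estimated depth
    s_rgb = F.cosine_similarity(rgb_embed(img_a), rgb_embed(img_b), dim=-1)
    s_depth = F.cosine_similarity(depth_embed(depth_a),
                                  depth_embed(depth_b), dim=-1)
    return w * s_rgb + (1 - w) * s_depth  # weighted score-level fusion
```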
Citations: 47