
Latest publications from the 2019 International Conference on Biometrics (ICB)

NIR-to-VIS Face Recognition via Embedding Relations and Coordinates of the Pairwise Features
Pub Date : 2019-06-01 DOI: 10.1109/ICB45273.2019.8987306
Myeongah Cho, Tae-Young Chung, Taeoh Kim, Sangyoun Lee
NIR-to-VIS face recognition identifies faces across two different domains by extracting domain-invariant features. This is a challenging problem due to the differing characteristics of the two domains and the lack of NIR face datasets. To reduce the domain discrepancy while reusing existing face recognition models, we propose a 'Relation Module' that can simply be added on to any face recognition model. The local features extracted from a face image contain information about each component of the face. Given the two different domain characteristics, using the relationships between local features is more domain-invariant than using the features as they are. In addition to these relationships, positional information, such as the distance from lips to chin or from eye to eye, also provides domain-invariant cues. In our Relation Module, a Relation Layer implicitly captures these relationships, and a Coordinates Layer models the positional information. In addition, our proposed triplet loss with a conditional margin reduces intra-class variation during training, yielding further performance improvements. Unlike general face recognition models, our add-on module does not need to be pre-trained on a large-scale dataset; it is fine-tuned only on the CASIA NIR-VIS 2.0 database. With the proposed module, we achieve improvements of 14.81% in rank-1 accuracy and 15.47% in verification rate at 0.1% FAR compared to two baseline models.
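The triplet loss with a conditional margin mentioned in the abstract can be sketched as follows. The paper's exact margin condition is not given here, so making the margin depend on whether a triplet crosses the NIR/VIS domain boundary is an illustrative assumption:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    # Hinge on squared L2 distances: the anchor must be closer to the
    # positive than to the negative by at least `margin`.
    d_ap = np.sum((anchor - positive) ** 2)
    d_an = np.sum((anchor - negative) ** 2)
    return float(max(0.0, d_ap - d_an + margin))

def conditional_triplet_loss(anchor, positive, negative,
                             cross_domain, base_margin=0.2, extra=0.2):
    # Hypothetical conditional margin: widen the margin for cross-domain
    # (NIR vs. VIS) triplets. The paper's actual condition is not stated
    # in this abstract.
    margin = base_margin + (extra if cross_domain else 0.0)
    return triplet_loss(anchor, positive, negative, margin)
```

Enforcing a larger margin on harder (cross-domain) triplets is one common way such a conditional loss tightens intra-class variation.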
Citations: 7
Iris + Ocular: Generalized Iris Presentation Attack Detection Using Multiple Convolutional Neural Networks
Pub Date : 2019-06-01 DOI: 10.1109/ICB45273.2019.8987261
Steven Hoffman, Renu Sharma, A. Ross
An iris recognition system is vulnerable to presentation attacks (PAs), in which an adversary presents artifacts such as printed eyes, plastic eyes, or cosmetic contact lenses to defeat the system. Existing PA detection schemes do not generalize well and often fail in cross-dataset scenarios, where training and testing are performed on vastly different datasets. In this work, we address this problem by fusing the outputs of three Convolutional Neural Network (CNN) based PA detectors, each of which examines a different portion of the input image. The first CNN (I-CNN) focuses on the iris region only, the second (F-CNN) uses the entire ocular region, and the third (S-CNN) uses a subset of patches sampled from the ocular region. Experiments conducted on two publicly available datasets (LivDetW15 and BERC-IF) and on a proprietary dataset (IrisID) confirm that this bag of CNNs is effective in improving the generalizability of PA detectors.
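A minimal sketch of fusing the three detectors' outputs at score level. The abstract does not specify the fusion rule, so the equal-weight average and the 0.5 decision threshold below are assumptions:

```python
import numpy as np

def fuse_pad_scores(s_iris, s_ocular, s_patches, weights=(1/3, 1/3, 1/3)):
    # Weighted average of per-detector PA scores, where a higher score
    # means the sample is more likely a presentation attack.
    return float(np.dot(np.array(weights), np.array([s_iris, s_ocular, s_patches])))

def is_attack(s_iris, s_ocular, s_patches, threshold=0.5):
    # Flag the sample as an attack when the fused score exceeds an
    # illustrative decision threshold.
    return fuse_pad_scores(s_iris, s_ocular, s_patches) > threshold
```

Score-level fusion like this lets each CNN compensate for regions of the image the others ignore.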
Citations: 20
Hunting for Fashion via Large Scale Soft Biometrics Analysis
Pub Date : 2019-06-01 DOI: 10.1109/ICB45273.2019.8987314
Xiaoyuan Wang, Li Lu, Qijun Zhao, K. Ubul
Fashion analysis has gained increasing attention thanks to its immense potential in the fashion industry, precision marketing, and sociological analysis. While much fashion analysis work has been done on clothing and makeup, little of it addresses the problem from the perspective of large-scale soft biometrics. In this paper, we focus on soft biometric attributes of human faces, particularly lip color and hair color, and analyze them on a large-scale data set with the aim of revealing fashion trends in lipstick and hair color. To this end, we first perform the following steps on each image: face detection, occlusion detection, face parsing, and color-feature extraction from the lip and hair regions. We then cluster the extracted color features over the given large-scale data set. In the experiments, we collect from the Internet 15,366 mouth-occluded and 14,580 hair-occluded face images to train an effective occlusion detector, so that noisy face images with occluded mouths or hair are excluded from the subsequent fashion analysis, plus more than 20,000 additional face images for analyzing the fashion trends of lipstick and hair colors. Our experimental results on the collected large-scale data set demonstrate the effectiveness of the proposed method.
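The clustering step could look like the following minimal k-means over per-image color features (e.g. the mean RGB of the lip or hair region). The paper's actual clustering algorithm and feature dimensionality are not stated in the abstract, so this is only a sketch:

```python
import numpy as np

def kmeans(features, k, iters=20):
    # Minimal k-means; centers are initialized deterministically at the
    # first k feature vectors for simplicity.
    centers = features[:k].astype(float).copy()
    labels = np.zeros(len(features), dtype=int)
    for _ in range(iters):
        # Assign every feature vector to its nearest center.
        dists = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each center to the mean of its assigned members.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = features[labels == j].mean(axis=0)
    return labels, centers
```

The resulting cluster centers act as the dominant lipstick/hair colors, and cluster sizes over time would trace the fashion trend.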
Citations: 1
On the Impact of Different Fabrication Materials on Fingerprint Presentation Attack Detection
Pub Date : 2019-06-01 DOI: 10.1109/ICB45273.2019.8987425
Lázaro J. González Soler, M. Gomez-Barrero, Leonardo Chang, Airel Pérez Suárez, C. Busch
Presentation Attack Detection (PAD) is the task of determining whether a sample stems from a live subject (a bona fide presentation) or from an artificial replica (a Presentation Attack Instrument, PAI). Several PAD approaches detect PAIs with high effectiveness when the materials used to fabricate them are known a priori. However, most of these methods do not take the characteristics of PAI species into account and thus fail to generalize to new, realistic, and more challenging scenarios where the materials may be unknown. Motivated by this, we explore the impact of different PAI species, fabricated from different materials, on several local descriptors combined with Fisher Vector feature encoding, in order to increase robustness to unknown attacks. Experimental results on the well-established benchmarks of the LivDet 2011, LivDet 2013, and LivDet 2015 competitions report error rates that outperform the top state-of-the-art in the presence of unknown attacks. Moreover, the evaluation reveals differences in detection performance due to the variability between PAI species.
Citations: 13
Directed Adversarial Attacks on Fingerprints using Attributions
Pub Date : 2019-06-01 DOI: 10.1109/ICB45273.2019.8987267
S. Fernandes, Sunny Raj, Eddy Ortiz, Iustina Vintila, Sumit Kumar Jha
Fingerprint recognition systems verify the identity of individuals and provide access to secure information in various commercial applications. However, with advancements in artificial intelligence, fingerprint-based security methods have become vulnerable to attack, and such a breach has the potential to compromise confidential, private, and valuable information. In this paper, we attack a state-of-the-art fingerprint recognition system based on transfer learning. Our approach uses attribution analysis to identify the fingerprint region crucial to correct classification, and then perturbs the fingerprint using error masks derived from a neural network to generate an adversarial fingerprint. The image quality assessment metrics applied to calculate the difference between the original and perturbed fingerprints include average difference, maximum difference, normalized absolute error, and peak signal-to-noise ratio. On the ATVS fingerprint dataset, the differences between these values for the original and the corresponding perturbed fingerprint images are negligible. Further, the VeriFinger SDK is used to detect minutiae and perform matching between the original and perturbed fingerprints. The matching score is above 250, reinforcing that there is virtually no perceptible loss between the original and perturbed fingerprints.
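The four image-quality metrics named in the abstract can be computed as below for grayscale images in the 0-255 range; the exact normalization conventions used by the authors are an assumption:

```python
import numpy as np

def quality_metrics(orig, pert):
    # Compare an original and a perturbed grayscale fingerprint image
    # (float arrays in [0, 255]).
    diff = orig.astype(float) - pert.astype(float)
    avg_diff = float(diff.mean())                         # average difference
    max_diff = float(np.abs(diff).max())                  # maximum difference
    nae = float(np.abs(diff).sum() / np.abs(orig).sum())  # normalized absolute error
    mse = float((diff ** 2).mean())
    # Peak signal-to-noise ratio, with 255 as the peak signal value.
    psnr = float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)
    return avg_diff, max_diff, nae, psnr
```

Near-zero differences and a high PSNR are what make the adversarial perturbation hard to notice visually.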
Citations: 3
The Harms of Demographic Bias in Deep Face Recognition Research
Pub Date : 2019-06-01 DOI: 10.1109/ICB45273.2019.8987334
R. Vicente-Garcia, Lukasz Wandzik, Louisa Grabner, J. Krüger
In this work we demonstrate the existence of demographic bias in the face representations of currently popular deep-learning-based face recognition models, exposing a poor research and development practice that may lead to systematic discrimination against certain demographic groups in critical scenarios such as automated border control. Furthermore, by simulating the template morphing attack, we reveal significant security risks that derive from demographic bias in current deep face models. This widely ignored problem poses important questions about fairness and accountability in face recognition.
Citations: 28
Learning Global Fingerprint Features by Training a Fully Convolutional Network with Local Patches
Pub Date : 2019-06-01 DOI: 10.1109/ICB45273.2019.8987387
Ruilin Li, Dehua Song, Yuhang Liu, Jufu Feng
Learning fingerprint representations is of critical importance in fingerprint indexing algorithms. Convolutional neural networks (CNNs) provide fingerprint features that perform remarkably well. In previous CNN-based methods, global fingerprint features are acquired either by training on entire fingerprints or by aggregating local descriptors. The former does not make full use of the information in matched minutiae and therefore achieves relatively low performance, while the latter must extract all local features, which is time-consuming. In this paper, we propose an efficient strategy for learning global features that makes full use of the information in matched minutiae. We train a fully convolutional network (FCN) on local patches. Patch classes contain more information than the original fingerprint classes, and this information helps in learning discriminative features. In the indexing stage, we utilize the capability of the FCN to obtain global features of whole fingerprints. Furthermore, the learned features are robust to translation, rotation, and occlusion, so we do not need to align fingerprints. The proposed approach outperforms the state-of-the-art on benchmark datasets. We achieve 99.83% identification accuracy at a penetration rate of 1% using only 256 bytes per fingerprint on NIST SD4.
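The key property being exploited here, that a fully convolutional network trained on fixed-size patches can ingest a whole fingerprint of any size and, after global pooling, still emit a fixed-length descriptor, can be illustrated with a toy convolution-plus-pooling sketch (the kernels below are arbitrary placeholders, not learned weights):

```python
import numpy as np

def conv2d_valid(img, kernels):
    # Naive 'valid' correlation of a single-channel image with a bank of
    # kernels, returning an (H', W', C) feature map.
    kh, kw = kernels.shape[1:]
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1, len(kernels)))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (kernels * img[i:i + kh, j:j + kw]).sum(axis=(1, 2))
    return out

def global_feature(img, kernels):
    # Global average pooling collapses the spatial dimensions, so the
    # descriptor length depends only on the number of kernels -- this is
    # what lets a patch-trained FCN process whole fingerprints of any size.
    return conv2d_valid(img, kernels).mean(axis=(0, 1))
```

Because the descriptor length is fixed by the filter bank, patches and whole fingerprints share one embedding space.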
Citations: 13
Permanence of ECG Biometric: Experiments Using Convolutional Neural Networks
Pub Date : 2019-06-01 DOI: 10.1109/ICB45273.2019.8987383
Abhishek Ranjan
The ECG has emerged as an appealing biometric primarily because it is difficult to spoof. Because the ECG is a continuous measure of an electrophysiological signal, it is hard to mimic, but at the same time its day-to-day variations affect its permanence. In this paper, we present a study of the permanence of the ECG biometric using a Convolutional Neural Network based authentication system and a multi-session ECG dataset collected from 800 users. The authentication system achieves an equal error rate of 2% on the ECG-ID database, improving on the state-of-the-art. Using this system, we designed a series of rigorous experiments that vary the number of days elapsed between enrollment and authentication. The results show that, even when controlling for posture, the equal error rate increases as days pass. Simply including more data at enrollment does improve accuracy, but more recent data are significantly more advantageous.
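The equal error rate reported here can be computed from genuine and impostor score sets as sketched below (a generic EER routine, not the authors' evaluation code):

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    # Sweep every observed score as a threshold and return the operating
    # point where the false accept rate (FAR) and false reject rate (FRR)
    # are closest; a higher score means a stronger "same user" claim.
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    eer, gap = 1.0, np.inf
    for t in thresholds:
        frr = float(np.mean(genuine < t))    # genuine users wrongly rejected
        far = float(np.mean(impostor >= t))  # impostors wrongly accepted
        if abs(far - frr) < gap:
            gap, eer = abs(far - frr), (far + frr) / 2
    return eer
```

Re-running this metric with enrollment and probe sessions separated by more days is exactly the kind of permanence experiment the paper describes.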
Citations: 10
Improving Cross-database Face Presentation Attack Detection via Adversarial Domain Adaptation
Pub Date : 2019-06-01 DOI: 10.1109/ICB45273.2019.8987254
Guoqing Wang, Hu Han, S. Shan, Xilin Chen
Face recognition (FR) is widely used in many applications, from access control to smartphone unlocking. As a result, face presentation attack detection (PAD) has drawn increasing attention as a way to secure FR systems. Traditional PAD approaches mainly assume that training and testing scenarios have similar imaging conditions (illumination, scene, camera sensor, etc.) and thus may generalize poorly to new application scenarios. In this work, we propose an end-to-end learning approach that improves PAD generalization by exploiting prior knowledge from the source domain via adversarial domain adaptation. We first build a source-domain PAD model optimized with a triplet loss. Subsequently, we perform adversarial domain adaptation with respect to the target domain, so that the source and target domain models learn a shared embedding space in which a discriminator cannot reliably predict whether a sample comes from the source or the target domain. Finally, PAD in the target domain is performed with a k-nearest-neighbors (k-NN) classifier in the embedding space. The proposed approach shows promising generalization capability on a number of public-domain face PAD databases.
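The final k-NN classification step in the shared embedding space might look like this minimal sketch (Euclidean distance and k=3 are assumptions; the abstract does not fix them):

```python
import numpy as np

def knn_predict(query, embeddings, labels, k=3):
    # Majority vote over the k nearest labelled embeddings (Euclidean
    # distance) in the shared source/target embedding space.
    dists = np.linalg.norm(embeddings - query, axis=1)
    nearest = np.argsort(dists)[:k]
    values, counts = np.unique(labels[nearest], return_counts=True)
    return values[counts.argmax()]
```

Because the adapted embedding space is domain-agnostic, labelled source-domain samples can serve directly as the k-NN reference set for target-domain queries.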
Citations: 55
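The pipeline in the abstract above — a source model trained with a triplet loss, adversarially aligned embeddings, and a final k-NN decision in the shared embedding space — can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation; the function names, margin value, and toy data are assumptions for demonstration only.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    # Standard triplet loss on embedding rows: pull same-class
    # (anchor, positive) pairs together, push negatives at least
    # `margin` further away in squared L2 distance.
    d_pos = np.sum((anchor - positive) ** 2, axis=1)
    d_neg = np.sum((anchor - negative) ** 2, axis=1)
    return np.mean(np.maximum(d_pos - d_neg + margin, 0.0))

def knn_predict(embeddings, labels, query, k=5):
    # k-NN vote in the embedding space: the final PAD step, where
    # `labels` are 0 (bona fide) / 1 (attack) for target-domain samples.
    dists = np.linalg.norm(embeddings - query, axis=1)
    nearest = labels[np.argsort(dists)[:k]]
    return np.bincount(nearest).argmax()
```

In this simplified view, the adversarial adaptation step only has to produce embeddings in which bona fide and attack samples cluster consistently across domains; the classifier itself is non-parametric.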
Regressing 3D Face Shapes from Arbitrary Image Sets with Disentanglement in Shape Space
Pub Date : 2019-06-01 DOI: 10.1109/ICB45273.2019.8987234
W. Tian, Feng Liu, Qijun Zhao
Existing methods for reconstructing 3D faces from multiple unconstrained images mainly focus on generating a canonical identity shape. This paper instead aims to optimize both the identity shape and the deformed shapes unique to individual images. To this end, we disentangle 3D face shapes into identity and residual components and leverage facial landmarks on the 2D images to regress both component shapes in shape space directly. Compared with existing methods, our method reconstructs more personalized and visually appealing 3D face shapes thanks to its ability to effectively explore both common and different shape characteristics among the multiple images and to cope with various shape deformations not limited to expression changes. Quantitative evaluation shows that our method achieves lower reconstruction errors than state-of-the-art methods.
Citations: 1
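The disentanglement idea above — a per-image 3D face shape expressed as a shared identity component plus a per-image residual deformation — can be illustrated with a linear shape-space sketch. This is a hypothetical simplification (the basis matrices and coefficient names are assumptions), not the paper's landmark-based regression network.

```python
import numpy as np

def compose_shape(mean_shape, id_basis, id_coef, res_basis, res_coef):
    """Compose one image's 3D face shape from disentangled components.

    mean_shape: (3N,)    flattened mean face vertices
    id_basis:   (3N, Ki) identity shape space; id_coef is shared
                         across all images of the same person
    res_basis:  (3N, Kr) residual shape space; res_coef varies per
                         image (expression and other deformations)
    """
    return mean_shape + id_basis @ id_coef + res_basis @ res_coef
```

Under this linear model, the canonical identity shape is simply `mean_shape + id_basis @ id_coef`, while each input image contributes its own residual coefficients on top of it.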
Journal: 2019 International Conference on Biometrics (ICB)