
Latest publications from the 2019 International Conference on Biometrics (ICB)

Gait Recognition from Markerless 3D Motion Capture
Pub Date : 2019-06-01 DOI: 10.1109/ICB45273.2019.8987318
James Rainey, John D. Bustard, S. McLoone
State-of-the-art gait recognition methods often make use of the shape of the body as well as its movement, as in the use of Gait Energy Images (GEIs), for recognition. However, it is desirable to have a method that works exclusively with the movement of the body, as clothing and other factors may interfere with the biometric signature derived from body shape. Recent advances in markerless motion capture enable full 3D body poses to be estimated from unconstrained video sources. This paper describes how one such technique can be used to improve performance in verification tests. The markerless motion capture algorithm fits the 3D SMPL body model to a 2D image. Joint rotations from a single gait cycle are extracted from the model and matched using a verification system trained with an automated machine learning system, auto-sklearn. Evaluations of the method were performed on the CASIA-B gait dataset, and results show competitive verification performance with an Equal Error Rate of 18.40%.
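The reported Equal Error Rate is the operating point where the false accept and false reject rates coincide. A minimal sketch of how an EER can be computed from verification scores (synthetic scores, not the paper's data):

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """Find the operating point where false-accept and false-reject rates meet.

    genuine:  scores from same-subject comparisons (higher = better match)
    impostor: scores from different-subject comparisons
    """
    genuine, impostor = np.asarray(genuine), np.asarray(impostor)
    eer, best_gap = 1.0, float("inf")
    for t in np.unique(np.concatenate([genuine, impostor])):
        far = np.mean(impostor >= t)  # impostors wrongly accepted
        frr = np.mean(genuine < t)    # genuine users wrongly rejected
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return float(eer)
```

With perfectly separable score distributions the EER is zero; an 18.40% EER means the two error rates cross at roughly one error in five comparisons.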
Citations: 2
Making the most of what you have! Profiling biometric authentication on mobile devices
Pub Date : 2019-06-01 DOI: 10.1109/ICB45273.2019.8987402
Sanka Rasnayaka, Sanjay Saha, T. Sim
In order to provide the additional security required by modern mobile devices, biometric methods and Continuous Authentication (CA) systems are gaining popularity. Most existing work on CA is concerned with achieving higher accuracy or fusing multiple modalities. However, a mobile environment places tighter constraints on the available resources. This work is the first to compare different biometric modalities based on the resources they use. We do this by determining the Resource Profile Curve (RPC) for each modality. This curve reveals the trade-off between authentication accuracy and resource usage, and is helpful for the different usage scenarios in which a CA system needs to operate. In particular, we explain how a CA system can intelligently switch between RPCs to conserve battery power, reduce memory usage, or maximize authentication accuracy. We argue that RPCs ought to guide the development of practical CA systems.
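Switching along Resource Profile Curves can be sketched as choosing, for the current resource budget, the most accurate modality that still fits. The operating points below are hypothetical, purely to illustrate the selection logic, not numbers from the paper:

```python
# Hypothetical (accuracy, power draw in mW) operating points per modality.
MODALITY_PROFILES = {
    "face":  (0.95, 120.0),
    "voice": (0.88, 45.0),
    "touch": (0.80, 10.0),
}

def select_modality(power_budget_mw):
    """Pick the most accurate modality whose power draw fits the budget."""
    feasible = [(acc, name) for name, (acc, power) in MODALITY_PROFILES.items()
                if power <= power_budget_mw]
    return max(feasible)[1] if feasible else None
```

As the budget shrinks (e.g., on low battery), the selector walks down the curve toward cheaper, less accurate modalities instead of stopping authentication altogether.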
Citations: 7
Dense Fingerprint Registration via Displacement Regression Network
Pub Date : 2019-06-01 DOI: 10.1109/ICB45273.2019.8987300
Zhe Cui, Jianjiang Feng, Jie Zhou
Dense registration of fingerprints provides pixel-wise correspondences between two fingerprints, which is beneficial for fingerprint mosaicking and matching. However, the problem is very challenging due to large distortion, low fingerprint quality, and a lack of distinctive features. The performance of existing dense registration approaches, such as image correlation and phase demodulation, is limited by manually designed features and similarity measures. To overcome these limitations, we propose a dense fingerprint registration algorithm based on a convolutional neural network. The key component is a displacement regression network (DRN) that regresses a pixel-wise displacement field directly from coarsely aligned fingerprint images. Ground-truth training data is generated automatically by an existing dense registration algorithm, without tedious manual labelling. We also propose a multi-scale matching score fusion method to show how the proposed registration algorithm improves fingerprint matching accuracy. Experimental results on FVC2004 DB1_A and DB2_A and the Tsinghua Distorted Fingerprint (TDF) database show that our method reaches state-of-the-art registration performance.
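Once a pixel-wise displacement field has been regressed, registering one fingerprint to another is a resampling step. A minimal nearest-neighbour warp illustrating how such a field is applied (the DRN itself is not reproduced here):

```python
import numpy as np

def apply_displacement(img, dx, dy):
    """Resample img at (y + dy, x + dx) per pixel, nearest-neighbour, clamped.

    dx, dy: per-pixel displacement fields of the same shape as img.
    """
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(np.rint(ys + dy), 0, h - 1).astype(int)
    src_x = np.clip(np.rint(xs + dx), 0, w - 1).astype(int)
    return img[src_y, src_x]
```

A constant field reduces to a rigid shift; a spatially varying field models the elastic skin distortion the paper targets.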
Citations: 2
Multibiometrics User Recognition using Adaptive Cohort Ranking
Pub Date : 2019-06-01 DOI: 10.1109/ICB45273.2019.8987269
A. Anand, Amioy Kumar, Ajay Kumar
Personal identification using multibiometrics is desirable in a wide range of high-security and/or forensic applications, as it can address the performance limitations of unimodal biometrics systems. This paper presents a new multibiometrics fusion scheme to improve user identification/recognition performance. We model the biometric identification solution using an adaptive cohort ranking approach, which more effectively utilizes cohort information to maximize true positive identification rates. In contrast to traditional cohort-based methods, the proposed cohort ranking approach offers the merit of matcher independence, as it makes no assumption about the nature of the score distributions from any of the biometric matchers. In addition, our scheme is adaptive and can be incorporated with any biometric matcher or technology. The proposed approach is evaluated on publicly available unimodal and multimodal biometrics databases, i.e., BSSR1 multimodal matching scores for fingerprint and face matchers and XM2VTS matching scores from synchronized databases of face and voice. In both the unimodal and multimodal databases, our results indicate that the proposed approach can outperform conventional adaptive identification approaches. The experimental results from both public databases are quite promising and validate the contributions of this work.
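The paper's adaptive ranking scheme is not reproduced here, but the core cohort idea, scoring a probe by its rank among non-mate cohort scores rather than by the raw score itself, can be sketched as follows. Rank statistics are distribution-free, which is the matcher-independence property the abstract highlights:

```python
def cohort_rank_score(raw_score, cohort_scores):
    """Fraction of cohort (non-mate) comparison scores the probe score beats.

    Being rank-based, the result makes no assumption about the matcher's
    score distribution, so scores from different matchers become comparable.
    """
    beaten = sum(1 for c in cohort_scores if raw_score > c)
    return beaten / len(cohort_scores)
```

Rank scores from several matchers land on the same 0-to-1 scale, so they can be fused directly.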
Citations: 2
Gesture-based User Identity Verification as an Open Set Problem for Smartphones
Pub Date : 2019-06-01 DOI: 10.1109/ICB45273.2019.8987373
Kálmán Tornai, W. Scheirer
The most straightforward, yet insecure, methods of authenticating a person on smartphones derive from the solutions applied to personal computers or smart cards, namely authorization by passwords or numeric codes. Alarmingly, the widespread use of smartphone platforms implies that people are carrying sensitive information around in their pockets, making that information more physically available. As smartphone owners often use their devices in public, these short numeric codes or other forms of passwords can be obtained quickly through shoulder surfing, making the restricted data far more accessible to those not authorized to access the device. In this paper, we address the problem of biometric verification on smartphones. We propose a new approach for gesture-based verification that makes use of open set recognition algorithms. Further, we introduce a new database of inertial measurements to investigate the user identification capabilities of this approach. The results we have obtained indicate that this approach is a feasible solution, although the precision of the method depends strongly on the chosen training samples.
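In an open-set setting, a probe is accepted only when it lies close enough to the claimed enrollee's template; gestures from subjects never enrolled must fall outside and be rejected rather than forced into the nearest known class. A minimal distance-threshold sketch of that decision rule (illustrative, not the paper's specific algorithm):

```python
import math

def verify_open_set(probe, template, threshold):
    """Accept only if the probe's Euclidean distance to the claimed template
    is within the threshold; unknown subjects are rejected, not re-assigned."""
    return math.dist(probe, template) <= threshold
```

The threshold controls the open-set trade-off: tightening it rejects more impostors at the cost of rejecting more genuine, but atypical, gestures.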
Citations: 6
Deep Pixel-wise Binary Supervision for Face Presentation Attack Detection
Pub Date : 2019-06-01 DOI: 10.1109/ICB45273.2019.8987370
Anjith George, S. Marcel
Face recognition has evolved into a prominent biometric authentication modality. However, vulnerability to presentation attacks curtails its reliable deployment. Automatic detection of presentation attacks is essential for the secure use of face recognition technology in unattended scenarios. In this work, we introduce a Convolutional Neural Network (CNN) based framework for presentation attack detection with deep pixel-wise supervision. The framework uses only frame-level information, making it suitable for deployment in smart devices with minimal computational and time overhead. We demonstrate the effectiveness of the proposed approach on public datasets in both intra- and cross-dataset experiments. The proposed approach achieves an HTER of 0% on the Replay-Mobile dataset and an ACER of 0.42% on Protocol-1 of the OULU-NPU dataset, outperforming state-of-the-art methods.
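Deep pixel-wise binary supervision amounts to applying a binary cross-entropy loss at every location of an intermediate feature map, with the map's mean serving as the frame-level liveness score. A numpy sketch of that loss (illustrative, not the authors' training code):

```python
import numpy as np

def pixelwise_bce(pred_map, label):
    """Binary cross-entropy averaged over all pixels of the prediction map.

    label: 1 for bona fide frames, 0 for presentation attacks.
    """
    eps = 1e-7
    p = np.clip(pred_map, eps, 1 - eps)
    return float(np.mean(-(label * np.log(p) + (1 - label) * np.log(1 - p))))

def frame_score(pred_map):
    """The mean of the map acts as the final score for the frame."""
    return float(pred_map.mean())
```

Supervising every pixel gives the network many weak labels per frame instead of a single binary one, which is what makes the frame-level-only pipeline trainable.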
Citations: 149
Vulnerability assessment and detection of Deepfake videos
Pub Date : 2019-06-01 DOI: 10.1109/ICB45273.2019.8987375
Pavel Korshunov, S. Marcel
It is becoming increasingly easy to automatically replace the face of one person in a video with the face of another by using a pre-trained generative adversarial network (GAN). Recent public scandals, e.g., the faces of celebrities being swapped onto pornographic videos, call for automated ways to detect these Deepfake videos. To help develop such methods, in this paper we present the first publicly available set of Deepfake videos, generated from videos of the VidTIMIT database. We used open source software based on GANs to create the Deepfakes, and we emphasize that training and blending parameters can significantly impact the quality of the resulting videos. To demonstrate this impact, we generated videos of low and high visual quality (320 videos each) using differently tuned parameter sets. We show that state-of-the-art face recognition systems based on the VGG and Facenet neural networks are vulnerable to Deepfake videos, with false acceptance rates of 85.62% and 95.00% respectively (on the high-quality versions), which means methods for detecting Deepfake videos are necessary. Considering several baseline approaches, we found that the best-performing method, based on visual quality metrics often used in the presentation attack detection domain, yields an 8.97% equal error rate on high-quality Deepfakes. Our experiments demonstrate that GAN-generated Deepfake videos are challenging for both face recognition systems and existing detection methods, and the further development of face swapping technology will make them even more so.
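The reported false acceptance rates measure how often Deepfake probes clear a face matcher's accept threshold. A one-function sketch of that metric on synthetic scores:

```python
def false_acceptance_rate(attack_scores, threshold):
    """Fraction of attack (Deepfake) match scores at or above the accept
    threshold, i.e. spoofed probes the matcher would wrongly accept."""
    return sum(s >= threshold for s in attack_scores) / len(attack_scores)
```

The 85.62% and 95.00% figures mean that, at the matchers' normal operating thresholds, the vast majority of high-quality Deepfake probes are accepted as the impersonated subject.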
Citations: 86
Attribute-Guided Deep Polarimetric Thermal-to-visible Face Recognition
Pub Date : 2019-06-01 DOI: 10.1109/ICB45273.2019.8987416
S. M. Iranmanesh, N. Nasrabadi
In this paper, we present an attribute-guided deep coupled learning framework to address the problem of matching polarimetric thermal face photos against a gallery of visible faces. The coupled framework contains two sub-networks, one dedicated to the visible spectrum and the other to the polarimetric thermal spectrum. Each sub-network is built on a generative adversarial network (GAN) architecture. We propose a novel Attribute-Guided Coupled Generative Adversarial Network (AGC-GAN) architecture which utilizes facial attributes to improve thermal-to-visible face recognition performance. The proposed AGC-GAN exploits facial attributes and leverages multiple loss functions to learn rich discriminative features in a common embedding subspace. To achieve realistic photo reconstruction while preserving discriminative information, we also add a perceptual loss term to the coupling loss function. An ablation study shows the effectiveness of the different loss functions in optimizing the proposed method. Moreover, the superiority of the model over state-of-the-art models is demonstrated on a polarimetric dataset.
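The abstract combines several objectives, adversarial, attribute/identity, and perceptual, into one training loss. A hedged sketch of such a weighted combination; the weights and term names are hypothetical, since the paper's actual formulation is not given in the abstract:

```python
def combined_loss(adversarial, identity, perceptual,
                  w_adv=1.0, w_id=10.0, w_perc=5.0):
    """Weighted sum of GAN sub-losses. The perceptual term trades photo
    realism against purely discriminative embedding quality; weights are
    illustrative placeholders, not values from the paper."""
    return w_adv * adversarial + w_id * identity + w_perc * perceptual
```

Ablating a term (setting its weight to zero) is exactly the kind of experiment the abstract's ablation study describes.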
Citations: 8
Latent Fingerprint Enhancement Based on DenseUNet
Pub Date : 2019-06-01 DOI: 10.1109/ICB45273.2019.8987279
Peng Qian, Aojie Li, Manhua Liu
The image quality of latent fingerprints is usually poor, with unclear ridge structure and various overlapping patterns. Enhancement is an important processing step to reduce noise, recover corrupted regions, and improve the clarity of the ridge structure for more accurate fingerprint recognition. Existing fingerprint enhancement methods cannot achieve good performance on latent fingerprints. In this paper, we propose a latent fingerprint enhancement method based on DenseUNet. First, to generate the training data, high-quality fingerprints are overlapped with structured noise. Then, a deep DenseUNet is constructed to transform a low-quality fingerprint image into a high-quality one through pixel-to-pixel, end-to-end training. Finally, the whole latent fingerprint is iteratively enhanced with the DenseUNet model to meet the image quality requirement. Experimental results and comparisons on the NIST SD27 latent fingerprint database demonstrate the promising performance of the proposed algorithm.
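The "Dense" in DenseUNet refers to DenseNet-style blocks in which every layer's output is concatenated onto the running feature stack, so later layers see all earlier features. A toy numpy sketch of that channel growth, with a stand-in mean filter replacing the real convolutions:

```python
import numpy as np

def dense_block(features, num_layers, growth_rate):
    """Each layer reads ALL accumulated channels and appends `growth_rate`
    new ones (channels-first layout: features.shape == (C, H, W))."""
    for _ in range(num_layers):
        # Stand-in for a convolution: channel-wise mean, broadcast to new channels.
        new = np.repeat(features.mean(axis=0, keepdims=True), growth_rate, axis=0)
        features = np.concatenate([features, new], axis=0)
    return features
```

Channel count grows linearly with depth (C + num_layers * growth_rate), which is what gives dense blocks their feature reuse.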
Cited by: 15
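The core idea in the DenseUNet abstract above is dense connectivity: every layer in a block receives the concatenation of the input and all earlier layers' outputs. The paper's code is not reproduced here; the NumPy sketch below illustrates only that connectivity pattern, with channel-mixing matrix multiplies standing in for the real 3x3 conv + batch-norm units, and the layer count and growth rate chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def dense_block(x, num_layers, growth_rate):
    """Dense connectivity: each layer receives the concatenation of the
    input and ALL earlier layers' outputs along the channel axis.
    x has shape (channels, H, W); channel-mixing matmuls stand in for
    the real 3x3 conv + BN units of a DenseNet/DenseUNet block."""
    features = [x]
    for _ in range(num_layers):
        inp = np.concatenate(features, axis=0)             # (C_total, H, W)
        w = rng.standard_normal((growth_rate, inp.shape[0]))
        out = np.maximum(np.einsum("oc,chw->ohw", w, inp), 0.0)  # ReLU
        features.append(out)                               # adds growth_rate maps
    return np.concatenate(features, axis=0)

x = rng.standard_normal((16, 8, 8))        # a 16-channel 8x8 feature patch
y = dense_block(x, num_layers=4, growth_rate=12)
print(y.shape)                             # (64, 8, 8): 16 + 4 * 12 channels
```

Because every layer sees all earlier feature maps, the output channel count grows linearly (input channels plus layers times growth rate), which is what lets DenseNet-style blocks reuse low-level ridge features deep in the network.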
Iris Feature Extraction and Matching Method for Mobile Biometric Applications
Pub Date : 2019-06-01 DOI: 10.1109/ICB45273.2019.8987379
G. Odinokikh, M. Korobkin, I. Solomatin, I. Efimov, A. Fartukov
Biometric methods are increasingly penetrating the field of mobile applications, confronting researchers with a huge number of problems that have not been considered before. Many different interaction scenarios, in conjunction with mobile-device performance limitations, challenge the capabilities of on-board biometrics. Saturated with complex textural features, the iris image serves as a source for extracting unique features of the individual that are used for recognition. The mentioned factors inherent to interaction with a mobile device affect not only the source image quality but also cause natural deformations of the iris, leading to high intra-class variations and hence reduced recognition performance. A novel method for iris feature extraction and matching is presented in this work. It is based on a lightweight CNN model combining the advantages of a classic approach and advanced deep learning techniques. The model utilizes shallow and deep feature representations in combination with characteristics describing the environment, which helps to reduce intra-class variations and, as a consequence, recognition errors. It showed high efficiency on the mobile dataset and several others, outperforming state-of-the-art methods by far.
Cited by: 3
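The iris abstract above combines shallow and deep feature representations with descriptors of the capture environment before matching. The fusion-and-verification step can be sketched as below; the feature dimensions, the simple concatenation fusion, and the cosine threshold are assumptions for illustration only, not the paper's actual model.

```python
import numpy as np

def l2norm(v):
    """Unit-normalise a feature vector (small epsilon avoids divide-by-zero)."""
    return v / (np.linalg.norm(v) + 1e-9)

def fuse(shallow, deep, env):
    """Concatenate normalised shallow texture features, the deep CNN
    embedding, and capture-environment descriptors into one template."""
    return l2norm(np.concatenate([l2norm(shallow), l2norm(deep), l2norm(env)]))

def match(t1, t2, threshold=0.35):
    """Cosine-distance verification: accept when the distance is below
    the operating threshold (threshold value is illustrative)."""
    dist = 1.0 - float(np.dot(t1, t2))
    return dist, dist < threshold

a = fuse(np.ones(32), np.ones(128), np.ones(4))
dist, accepted = match(a, a)   # identical templates give a distance near 0
print(round(dist, 6), accepted)
```

Normalising each component before concatenation keeps one modality (e.g. the 128-dimensional deep embedding) from dominating the fused template purely by magnitude; the verification threshold would then be tuned on a development set to a target error rate.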
Journal: 2019 International Conference on Biometrics (ICB)