
Latest publications in IET Biometrics

Pixel-wise supervision for presentation attack detection on identity document cards
IF 2.0 | CAS Tier 4 (Computer Science) | JCR Q3 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE) | Pub Date: 2022-06-27 | DOI: 10.1049/bme2.12088
Raghavendra Mudgalgundurao, Patrick Schuch, Kiran Raja, Raghavendra Ramachandra, Naser Damer

Identity documents (or IDs) play an important role in verifying the identity of a person, with wide applications in banks, travel, video-identification services and border controls. Replayed or photocopied ID cards can be misused to pass ID control in unsupervised scenarios if the liveness of a person is not checked. Detecting such presentation attacks on the ID card verification process, when the document is presented virtually, is a critical step for biometric systems to assure authenticity. In this paper, pixel-wise supervision on DenseNet is proposed to detect printed and digitally replayed presentation attacks. The authors motivate the use of pixel-wise supervision to leverage minute cues on various artefacts, such as moiré patterns and artefacts left by printers. A baseline benchmark is presented using different handcrafted and deep learning models on a newly constructed in-house database obtained from an operational system, consisting of 886 users with 433 bona fide, 67 print and 366 display attacks. It is demonstrated that the proposed approach achieves better performance compared to handcrafted features and deep models, with an Equal Error Rate of 2.22% and a Bona fide Presentation Classification Error Rate (BPCER) of 1.83% and 1.67% at an Attack Presentation Classification Error Rate (APCER) of 5% and 10%, respectively.
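The core idea, pixel-wise supervision on top of a deep backbone, can be made concrete with a small sketch: an auxiliary 1x1-convolution head produces a score map that is trained against an all-ones target for bona fide samples and an all-zeros target for attacks, alongside a pooled image-level score. The head below is a hedged illustration (channel count, pooling and loss weighting are assumptions), not the authors' exact DenseNet configuration.

```python
import torch
import torch.nn as nn

class PixelWiseHead(nn.Module):
    """1x1 conv head turning a backbone feature map into a pixel-wise score map."""
    def __init__(self, in_channels=1024):
        super().__init__()
        self.score_map = nn.Conv2d(in_channels, 1, kernel_size=1)

    def forward(self, feat):
        m = self.score_map(feat)          # (B, 1, H, W) pixel-wise logits
        s = m.mean(dim=(2, 3))            # (B, 1) pooled image-level logit
        return m, s

bce = nn.BCEWithLogitsLoss()

def pad_loss(head, feat, labels):
    # labels: (B,) floats, 1.0 = bona fide, 0.0 = attack
    m, s = head(feat)
    pixel_target = labels.view(-1, 1, 1, 1).expand_as(m)   # broadcast label to every pixel
    return bce(m, pixel_target) + bce(s.squeeze(1), labels)

# toy usage: random tensors stand in for a DenseNet block output
head = PixelWiseHead(in_channels=1024)
feat = torch.randn(4, 1024, 7, 7)
labels = torch.tensor([1.0, 0.0, 1.0, 0.0])
print(pad_loss(head, feat, labels).item())
```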

{"title":"Pixel-wise supervision for presentation attack detection on identity document cards","authors":"Raghavendra Mudgalgundurao,&nbsp;Patrick Schuch,&nbsp;Kiran Raja,&nbsp;Raghavendra Ramachandra,&nbsp;Naser Damer","doi":"10.1049/bme2.12088","DOIUrl":"10.1049/bme2.12088","url":null,"abstract":"<p>Identity documents (or IDs) play an important role in verifying the identity of a person with wide applications in banks, travel, video-identification services and border controls. Replay or photocopied ID cards can be misused to pass ID control in unsupervised scenarios if the liveness of a person is not checked. To detect such presentation attacks on ID card verification process when presented virtually is a critical step for the biometric systems to assure authenticity. In this paper, a pixel-wise supervision on DenseNet is proposed to detect presentation attacks of the printed and digitally replayed attacks. The authors motivate the approach to use pixel-wise supervision to leverage minute cues on various artefacts such as moiré patterns and artefacts left by the printers. The baseline benchmark is presented using different handcrafted and deep learning models on a newly constructed in-house database obtained from an operational system consisting of 886 users with 433 bona fide, 67 print and 366 display attacks. It is demonstrated that the proposed approach achieves better performance compared to handcrafted features and Deep Models with an Equal Error Rate of 2.22% and Bona fide Presentation Classification Error Rate (BPCER) of 1.83% and 1.67% at Attack Presentation Classification Error Rate of 5% and 10%.</p>","PeriodicalId":48821,"journal":{"name":"IET Biometrics","volume":"11 5","pages":"383-395"},"PeriodicalIF":2.0,"publicationDate":"2022-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/bme2.12088","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72496186","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Transferability analysis of adversarial attacks on gender classification to face recognition: Fixed and variable attack perturbation
IF 2.0 | CAS Tier 4 (Computer Science) | JCR Q3 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE) | Pub Date: 2022-06-27 | DOI: 10.1049/bme2.12082
Zohra Rezgui, Amina Bassit, Raymond Veldhuis

Most deep learning-based image classification models are vulnerable to adversarial attacks that introduce imperceptible changes to the input images for the purpose of model misclassification. It has been demonstrated that these attacks, targeting a specific model, are transferable among models performing the same task. However, models performing different tasks but sharing the same input space and model architecture were never considered in the transferability scenarios presented in the literature. In this paper, this phenomenon is analysed in the context of VGG16-based and ResNet50-based biometric classifiers. The authors investigate the impact of two white-box attacks on a gender classifier and assess a defence method as a countermeasure. Then, using adversarial images generated by the attacks, a pre-trained face recognition classifier is attacked in a black-box fashion. Two verification comparison settings are employed, in which images perturbed with the same and with different magnitudes of perturbation are compared. The authors' results indicate transferability in the fixed-perturbation setting for a Fast Gradient Sign Method attack and non-transferability in a pixel-guided denoiser attack setting. The interpretation of this non-transferability can support the use of fast and training-free adversarial attacks targeting soft biometric classifiers as a means to achieve soft biometric privacy protection while maintaining facial identity as utility.
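For readers unfamiliar with the attack used in the fixed-perturbation experiments, a minimal Fast Gradient Sign Method step looks as follows; the tiny stand-in classifier, epsilon value and image size are illustrative assumptions, not the VGG16/ResNet50 gender models studied in the paper.

```python
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, epsilon=0.03):
    """One Fast Gradient Sign Method step: x + epsilon * sign(grad_x loss)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

# toy stand-in classifier with two output classes (e.g. gender)
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2))
x = torch.rand(4, 3, 64, 64)            # toy image batch in [0, 1]
y = torch.randint(0, 2, (4,))           # toy binary labels
x_adv = fgsm_attack(model, x, y)
print((x_adv - x).abs().max().item())   # perturbation magnitude is bounded by epsilon
```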

{"title":"Transferability analysis of adversarial attacks on gender classification to face recognition: Fixed and variable attack perturbation","authors":"Zohra Rezgui,&nbsp;Amina Bassit,&nbsp;Raymond Veldhuis","doi":"10.1049/bme2.12082","DOIUrl":"10.1049/bme2.12082","url":null,"abstract":"<p>Most deep learning-based image classification models are vulnerable to adversarial attacks that introduce imperceptible changes to the input images for the purpose of model misclassification. It has been demonstrated that these attacks, targeting a specific model, are transferable among models performing the same task. However, models performing different tasks but sharing the same input space and model architecture were never considered in the transferability scenarios presented in the literature. In this paper, this phenomenon was analysed in the context of VGG16-based and ResNet50-based biometric classifiers. The authors investigate the impact of two white-box attacks on a gender classifier and contrast a defence method as a countermeasure. Then, using adversarial images generated by the attacks, a pre-trained face recognition classifier is attacked in a black-box fashion. Two verification comparison settings are employed, in which images perturbed with the same and different magnitude of the perturbation are compared. The authors’ results indicate transferability in the fixed perturbation setting for a Fast Gradient Sign Method attack and non-transferability in a pixel-guided denoiser attack setting. The interpretation of this non-transferability can support the use of fast and train-free adversarial attacks targeting soft biometric classifiers as means to achieve soft biometric privacy protection while maintaining facial identity as utility.</p>","PeriodicalId":48821,"journal":{"name":"IET Biometrics","volume":"11 5","pages":"407-419"},"PeriodicalIF":2.0,"publicationDate":"2022-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/bme2.12082","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88686270","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
An empirical analysis of keystroke dynamics in passwords: A longitudinal study
IF 2.0 | CAS Tier 4 (Computer Science) | JCR Q3 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE) | Pub Date: 2022-06-27 | DOI: 10.1049/bme2.12087
Simon Parkinson, Saad Khan, Alexandru-Mihai Badea, Andrew Crampton, Na Liu, Qing Xu

The use of keystroke timings as a behavioural biometric in fixed-text authentication mechanisms has been extensively studied. Previous research has investigated in isolation the effects of password length, character substitution, and participant repetition. These studies have used publicly available datasets containing a small number of passwords, with timings acquired from different experiments. Multiple experiments have also used the participant's first and last name as the password; however, this is not realistic for a password system. Not only is the user's name considered a weak password, but the participants' familiarity with typing the phrase minimises variation in the acquired samples as they become more familiar with the new password. Furthermore, no study has considered the combined impact of length, substitution, and repetition using the same participant pool. This is explored in this work, where the authors collected timings for 65 participants typing 40 passwords with varying characteristics, 4 times per week for 8 weeks. A total of 81,920 timing samples were processed using an instance-based distance and threshold matching approach. The results of this study provide empirical insight into how a password policy should be created to maximise the accuracy of the biometric system when considering substitution type and longitudinal effects.
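As an illustration of what an instance-based distance and threshold matching approach can look like, the sketch below scores a probe typing sample against an enrolment set with a scaled Manhattan distance; the distance measure, threshold and simulated timings are assumptions for illustration, not the authors' exact configuration.

```python
import numpy as np

def keystroke_verify(enrolment, probe, threshold=1.5):
    """Scaled Manhattan distance between a probe and the enrolment centroid."""
    mu = enrolment.mean(axis=0)
    sd = enrolment.std(axis=0) + 1e-6       # avoid division by zero
    distance = np.abs((probe - mu) / sd).mean()
    return distance, distance <= threshold  # accept if the rhythm is close enough

rng = np.random.default_rng(0)
enrol = rng.normal(0.12, 0.02, size=(20, 30))   # 20 repetitions x 30 timing features
genuine = rng.normal(0.12, 0.02, size=30)       # same typing rhythm
impostor = rng.normal(0.20, 0.05, size=30)      # different typing rhythm
print(keystroke_verify(enrol, genuine))
print(keystroke_verify(enrol, impostor))
```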

{"title":"An empirical analysis of keystroke dynamics in passwords: A longitudinal study","authors":"Simon Parkinson,&nbsp;Saad Khan,&nbsp;Alexandru-Mihai Badea,&nbsp;Andrew Crampton,&nbsp;Na Liu,&nbsp;Qing Xu","doi":"10.1049/bme2.12087","DOIUrl":"https://doi.org/10.1049/bme2.12087","url":null,"abstract":"<p>The use of keystroke timings as a behavioural biometric in fixed-text authentication mechanisms has been extensively studied. Previous research has investigated in isolation the effect of password length, character substitution, and participant repetition. These studies have used publicly available datasets, containing a small number of passwords with timings acquired from different experiments. Multiple experiments have also used the participant's first and last name as the password; however, this is not realistic of a password system. Not only is the user's name considered a weak password, but their familiarity with typing the phrase minimises variation in acquired samples as they become more familiar with the new password. Furthermore, no study has considered the combined impact of length, substitution, and repetition using the same participant pool. This is explored in this work, where the authors collected timings for 65 participants, when typing 40 passwords with varying characteristics, 4 times per week for 8 weeks. A total of 81,920 timing samples were processed using an instance-based distance and threshold matching approach. Results of this study provide empirical insight into how a password policy should be created to maximise the accuracy of the biometric system when considering substitution type and longitudinal effects.</p>","PeriodicalId":48821,"journal":{"name":"IET Biometrics","volume":"12 1","pages":"25-37"},"PeriodicalIF":2.0,"publicationDate":"2022-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/bme2.12087","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"50145548","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Hybrid biometric template protection: Resolving the agony of choice between bloom filters and homomorphic encryption
IF 2.0 | CAS Tier 4 (Computer Science) | JCR Q3 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE) | Pub Date: 2022-06-16 | DOI: 10.1049/bme2.12075
Amina Bassit, Florian Hahn, Raymond Veldhuis, Andreas Peter

Bloom filters (BFs) and homomorphic encryption (HE) are prominent techniques used to design biometric template protection (BTP) schemes, which aim to protect sensitive biometric information during storage and biometric comparison. However, the pros and cons of BF- and HE-based BTPs are not well studied in the literature. We investigate the strengths and weaknesses of these two approaches, since both seem promising from a theoretical viewpoint. Our key insight is to extend our theoretical investigation to cover the practical case of iris recognition, on the grounds that the iris (1) benefits from the alignment-free property of BFs and (2) induces huge computational burdens when implemented in the HE-encrypted domain. BF-based BTPs can be implemented to be either fast with high recognition accuracy while missing the important privacy property of 'unlinkability', or fast with the unlinkability property while missing the high accuracy. HE-based BTPs, on the other hand, are highly secure, achieve good accuracy, and meet the unlinkability property, but they are much slower than BF-based approaches. As a synthesis, we propose a hybrid BTP scheme that combines the good properties of BFs and HE, ensuring unlinkability and high recognition accuracy, while being about seven times faster than the traditional HE-based approach.
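To make the BF side of the comparison concrete, the sketch below hashes each column of a binary code (such as an iris code) into one bit of a block-wise Bloom filter and compares two protected templates with a normalised XOR dissimilarity; the block size, code dimensions and noise level are illustrative assumptions, and the HE side of the hybrid scheme is omitted entirely.

```python
import numpy as np

def bloom_template(code, cols_per_block=16):
    """Hash each column of a binary code into one bit of a per-block filter."""
    rows, cols = code.shape                      # e.g. a 10 x 512 binary iris code
    n_blocks = cols // cols_per_block
    filters = np.zeros((n_blocks, 2 ** rows), dtype=bool)
    weights = 2 ** np.arange(rows)
    for b in range(n_blocks):
        block = code[:, b * cols_per_block:(b + 1) * cols_per_block]
        filters[b, weights @ block] = True       # each column maps to one integer index
    return filters

def bloom_dissimilarity(f1, f2):
    """Normalised dissimilarity between two sets of Bloom filters."""
    num = np.logical_xor(f1, f2).sum(axis=1)
    den = f1.sum(axis=1) + f2.sum(axis=1)
    return float(np.mean(num / den))

rng = np.random.default_rng(1)
code1 = rng.integers(0, 2, size=(10, 512))
code2 = code1.copy()
code2[rng.random(code1.shape) < 0.05] ^= 1       # simulate 5% bit noise
print(bloom_dissimilarity(bloom_template(code1), bloom_template(code2)))
```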

{"title":"Hybrid biometric template protection: Resolving the agony of choice between bloom filters and homomorphic encryption","authors":"Amina Bassit,&nbsp;Florian Hahn,&nbsp;Raymond Veldhuis,&nbsp;Andreas Peter","doi":"10.1049/bme2.12075","DOIUrl":"10.1049/bme2.12075","url":null,"abstract":"<p>Bloom filters (BFs) and homomorphic encryption (HE) are prominent techniques used to design biometric template protection (BTP) schemes that aim to protect sensitive biometric information during storage and biometric comparison. However, the pros and cons of BF- and HE-based BTPs are not well studied in literature. We investigate the strengths and weaknesses of these two approaches since both seem promising from a theoretical viewpoint. Our key insight is to extend our theoretical investigation to cover the practical case of iris recognition on the ground that iris (1) benefits from the alignment-free property of BFs and (2) induces huge computational burdens when implemented in the HE-encrypted domain. BF-based BTPs can be implemented to be either fast with high recognition accuracy while missing the important privacy property of ‘unlinkability’, or to be fast with unlinkability-property while missing the high accuracy. HE-based BTPs, on the other hand, are highly secure, achieve good accuracy, and meet the unlinkability-property, but they are much slower than BF-based approaches. As a synthesis, we propose a hybrid BTP scheme that combines the good properties of BFs and HE, ensuring unlinkability and high recognition accuracy, while being about seven times faster than the traditional HE-based approach.</p>","PeriodicalId":48821,"journal":{"name":"IET Biometrics","volume":"11 5","pages":"430-444"},"PeriodicalIF":2.0,"publicationDate":"2022-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/bme2.12075","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90056623","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Point-convolution-based human skeletal pose estimation on millimetre wave frequency modulated continuous wave multiple-input multiple-output radar
IF 2.0 | CAS Tier 4 (Computer Science) | JCR Q3 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE) | Pub Date: 2022-06-13 | DOI: 10.1049/bme2.12081
Jinxiao Zhong, Liangnian Jin, Ran Wang

Compared with traditional approaches that use vision sensors, which can provide a high-resolution representation of targets, millimetre-wave radar is robust to scene lighting and weather conditions and has more applications. Current methods of human skeletal pose estimation can reconstruct targets, but they lose spatial information or do not take the density of the point cloud into consideration. We propose a skeletal pose estimation method that uses point convolution to extract features from the point cloud. By extracting the local information and density of each point in the point cloud of the target, the spatial location and structure information of the target can be obtained, and the accuracy of the pose estimation is increased. The extraction of point cloud features is based on point-by-point convolution, that is, different weights are applied to different features of each point, which also increases the nonlinear expression ability of the model. Experiments show that the proposed approach is effective, offering more distinct skeletal joints and a lower mean absolute error, with average localisation errors of 6.1 cm in X, 3.5 cm in Y and 3.3 cm in Z, respectively.
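The point-by-point convolution mentioned above is, in practice, a shared per-point transformation (a 1x1 convolution) applied to every point's feature vector. The sketch below is an illustrative encoder under the assumption that each radar point carries five features; the layer sizes and pooling are not the authors' architecture.

```python
import torch
import torch.nn as nn

class PointwiseEncoder(nn.Module):
    """Shared per-point MLP implemented with kernel_size=1 convolutions."""
    def __init__(self, in_feats=5, hidden=64, out_feats=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_feats, hidden, kernel_size=1), nn.ReLU(),
            nn.Conv1d(hidden, out_feats, kernel_size=1), nn.ReLU())

    def forward(self, points):                  # points: (B, in_feats, N)
        feats = self.net(points)                # per-point features (B, out_feats, N)
        return feats.max(dim=2).values          # permutation-invariant global descriptor

enc = PointwiseEncoder()
cloud = torch.randn(2, 5, 256)                  # 2 clouds of 256 radar points each
print(enc(cloud).shape)                         # torch.Size([2, 128])
```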

{"title":"Point-convolution-based human skeletal pose estimation on millimetre wave frequency modulated continuous wave multiple-input multiple-output radar","authors":"Jinxiao Zhong,&nbsp;Liangnian Jin,&nbsp;Ran Wang","doi":"10.1049/bme2.12081","DOIUrl":"10.1049/bme2.12081","url":null,"abstract":"<p>Compared with traditional approaches that used vision sensors which can provide a high-resolution representation of targets, millimetre-wave radar is robust to scene lighting and weather conditions, and has more applications. Current methods of human skeletal pose estimation can reconstruct targets, but they lose the spatial information or don't take the density of point cloud into consideration. We propose a skeletal pose estimation method that combines point convolution to extract features from the point cloud. By extracting the local information and density of each point in the point cloud of the target, the spatial location and structure information of the target can be obtained, and the accuracy of the pose estimation is increased. The extraction of point cloud features is based on point-by-point convolution, that is, different weights are applied to different features of each point, which also increases the nonlinear expression ability of the model. Experiments show that the proposed approach is effective. We offer more distinct skeletal joints and a lower mean absolute error, average localisation errors of 6.1 cm in <i>X</i>, 3.5 cm in <i>Y</i> and 3.3 cm in <i>Z</i>, respectively.</p>","PeriodicalId":48821,"journal":{"name":"IET Biometrics","volume":"11 4","pages":"333-342"},"PeriodicalIF":2.0,"publicationDate":"2022-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/bme2.12081","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91101921","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Analysis of the synthetic periocular iris images for robust Presentation Attacks Detection algorithms
IF 2.0 | CAS Tier 4 (Computer Science) | JCR Q3 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE) | Pub Date: 2022-06-07 | DOI: 10.1049/bme2.12084
Jose Maureira, Juan E. Tapia, Claudia Arellano, Christoph Busch

The LivDet-2020 competition, which focuses on Presentation Attack Detection (PAD) algorithms, still has open problems, mainly unknown attack scenarios. It is crucial to enhance PAD methods. This can be achieved by augmenting the number of Presentation Attack Instrument (PAI) and bona fide (genuine) images used to train such algorithms. Unfortunately, the capture and creation of PAIs, and even the capture of bona fide images, are sometimes complex to achieve. The generation of synthetic images with Generative Adversarial Network (GAN) algorithms may help, and has shown significant improvements in recent years. This paper presents a benchmark of GAN methods to achieve a novel synthetic PAI from a small set of periocular near-infrared images. The best PAI was obtained using StyleGAN2, and it was tested using the best PAD algorithm from LivDet-2020. The synthetic PAI was able to fool such an algorithm; as a result, all images were classified as bona fide. A MobileNetV2 was then trained using the synthetic PAI as a new class to achieve a more robust PAD. The resulting PAD was able to classify 96.7% of the synthetic images as attacks, with a BPCER10 of 0.24%. These results demonstrate the need for PAD algorithms to be constantly updated and trained with synthetic images.
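The BPCER10 figure quoted above is the Bona fide Presentation Classification Error Rate at the threshold where the Attack Presentation Classification Error Rate is fixed at 10%. A minimal sketch of that computation, on simulated scores where a higher score means 'attack', is shown below; the score distributions are purely illustrative.

```python
import numpy as np

def bpcer_at_apcer(bona_scores, attack_scores, target_apcer=0.10):
    """BPCER at the threshold where roughly target_apcer of attacks are missed."""
    thr = np.quantile(attack_scores, target_apcer)   # attacks below thr are accepted as bona fide
    return float(np.mean(bona_scores >= thr))        # bona fide at/above thr are rejected

rng = np.random.default_rng(2)
bona = rng.normal(0.2, 0.1, 1000)      # bona fide presentations score low
attack = rng.normal(0.8, 0.1, 1000)    # attack presentations score high
print(bpcer_at_apcer(bona, attack, 0.10))   # BPCER10
print(bpcer_at_apcer(bona, attack, 0.05))   # BPCER at APCER = 5%
```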

{"title":"Analysis of the synthetic periocular iris images for robust Presentation Attacks Detection algorithms","authors":"Jose Maureira,&nbsp;Juan E. Tapia,&nbsp;Claudia Arellano,&nbsp;Christoph Busch","doi":"10.1049/bme2.12084","DOIUrl":"10.1049/bme2.12084","url":null,"abstract":"<p>The LivDet-2020 competition focuses on Presentation Attacks Detection (PAD) algorithms, has still open problems, mainly unknown attack scenarios. It is crucial to enhance PAD methods. This can be achieved by augmenting the number of Presentation Attack Instruments (PAI) and Bona fide (genuine) images used to train such algorithms. Unfortunately, the capture and creation of PAI and even the capture of Bona fide images are sometimes complex to achieve. The generation of synthetic images with Generative Adversarial Networks (GAN) algorithms may help and has shown significant improvements in recent years. This paper presents a benchmark of GAN methods to achieve a novel synthetic PAI from a small set of periocular near-infrared images. The best PAI was obtained using StyleGAN2, and it was tested using the best PAD algorithm from the LivDet-2020. The synthetic PAI was able to fool such an algorithm. As a result, all images were classified as Bona fide. A MobileNetV2 was trained using the synthetic PAI as a new class to achieve a more robust PAD. The resulting PAD was able to classify 96.7% of synthetic images as attacks. BPCER<sub>10</sub> was 0.24%. Such results demonstrated the need for PAD algorithms to be constantly updated and trained with synthetic images.</p>","PeriodicalId":48821,"journal":{"name":"IET Biometrics","volume":"11 4","pages":"343-354"},"PeriodicalIF":2.0,"publicationDate":"2022-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/bme2.12084","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82133166","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Multiresolution synthetic fingerprint generation
IF 2.0 | CAS Tier 4 (Computer Science) | JCR Q3 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE) | Pub Date: 2022-06-03 | DOI: 10.1049/bme2.12083
Andre Brasil Vieira Wyzykowski, Mauricio Pamplona Segundo, Rubisley de Paula Lemes

Public access to existing high-resolution fingerprint databases has been discontinued, and no hybrid database exists that contains fingerprints from different sensors at high and medium resolutions. A novel hybrid approach to synthesise realistic, multiresolution, and multisensor fingerprints to address these issues is presented. The first step was to improve Anguli, a handcrafted fingerprint generator, to create pores, scratches, and dynamic ridge maps. Using CycleGAN, the maps are then converted into realistic fingerprints, adding textures to the images. Unlike other neural-network-based methods, the authors' method generates multiple images with different resolutions and styles for the same identity. With the authors' approach, a synthetic database with 14,800 fingerprints is built. In addition, fingerprint recognition experiments with pore- and minutiae-based matching techniques and different fingerprint quality analyses are conducted to confirm the similarity between the real and synthetic databases. Finally, a human classification analysis is performed, in which volunteers could not distinguish between authentic and synthetic fingerprints. These experiments demonstrate that the authors' approach is suitable for supporting further fingerprint recognition studies in the absence of real databases.

{"title":"Multiresolution synthetic fingerprint generation","authors":"Andre Brasil Vieira Wyzykowski,&nbsp;Mauricio Pamplona Segundo,&nbsp;Rubisley de Paula Lemes","doi":"10.1049/bme2.12083","DOIUrl":"10.1049/bme2.12083","url":null,"abstract":"<p>Public access to existing high-resolution databases was discontinued. Besides, a hybrid database that contains fingerprints of different sensors with high and medium resolutions does not exist. A novel hybrid approach to synthesise realistic, multiresolution, and multisensor fingerprints to address these issues is presented. The first step was to improve Anguli, a handcrafted fingerprint generator, to create pores, scratches, and dynamic ridge maps. Using CycleGAN, then the maps are converted into realistic fingerprints, adding textures to images. Unlike other neural network-based methods, the authors’ method generates multiple images with different resolutions and styles for the same identity. With the authors’ approach, a synthetic database with 14,800 fingerprints is built. Besides that, fingerprint recognition experiments with pore- and minutiae-based matching techniques and different fingerprint quality analyses are conducted to confirm the similarity between real and synthetic databases. Finally, a human classification analysis is performed, where volunteers could not distinguish between authentic and synthetic fingerprints. These experiments demonstrate that the authors’ approach is suitable for supporting further fingerprint recognition studies in the absence of real databases.</p>","PeriodicalId":48821,"journal":{"name":"IET Biometrics","volume":"11 4","pages":"314-332"},"PeriodicalIF":2.0,"publicationDate":"2022-06-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/bme2.12083","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77469700","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
Forearm multimodal recognition based on IAHP-entropy weight combination
IF 2.0 | CAS Tier 4 (Computer Science) | JCR Q3 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE) | Pub Date: 2022-05-27 | DOI: 10.1049/bme2.12080
Chaoying Tang, Mengen Qian, Ru Jia, Haodong Liu, Biao Wang

Biometrics are among the most popular authentication methods due to their advantages over traditional methods, such as higher security, better accuracy and more convenience. The recent COVID-19 pandemic has led to the wide use of face masks, which greatly affects traditional face recognition technology. The pandemic has also increased the focus on hygienic and contactless identity verification methods. The forearm is a new biometric that contains discriminative information. In this paper, we propose a multimodal recognition method that combines the veins and the geometry of a forearm. Five features are extracted from a forearm Near-Infrared (NIR) image: SURF, local line structures, global graph representations, forearm width and forearm boundary. These features are matched individually and then fused at the score level based on the Improved Analytic Hierarchy Process (IAHP)-entropy weight combination. Comprehensive experiments were carried out to evaluate the proposed recognition method and the fusion rule. The matching results show that the proposed method achieves a satisfactory performance.
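The entropy-weight half of the fusion rule can be sketched as follows: each matcher's normalised scores are turned into a distribution, matchers with lower entropy (a more informative spread) receive larger weights, and the fused score is the weighted sum. This is the generic entropy-weight method under illustrative data, not necessarily the authors' exact IAHP-entropy combination.

```python
import numpy as np

def entropy_weights(score_matrix):
    """Entropy-based weights for score-level fusion; one column per matcher."""
    p = score_matrix / score_matrix.sum(axis=0, keepdims=True)
    n = score_matrix.shape[0]
    entropy = -(p * np.log(p + 1e-12)).sum(axis=0) / np.log(n)
    w = 1.0 - entropy                    # lower entropy -> more discriminative -> larger weight
    return w / w.sum()

rng = np.random.default_rng(3)
scores = rng.random((50, 5))             # 50 comparisons x 5 forearm features, scores in [0, 1]
w = entropy_weights(scores)
fused = scores @ w                       # fused score for each comparison
print(w)
print(fused[:3])
```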

{"title":"Forearm multimodal recognition based on IAHP-entropy weight combination","authors":"Chaoying Tang,&nbsp;Mengen Qian,&nbsp;Ru Jia,&nbsp;Haodong Liu,&nbsp;Biao Wang","doi":"10.1049/bme2.12080","DOIUrl":"https://doi.org/10.1049/bme2.12080","url":null,"abstract":"<p>Biometrics are the among most popular authentication methods due to their advantages over traditional methods, such as higher security, better accuracy and more convenience. The recent COVID-19 pandemic has led to the wide use of face masks, which greatly affects the traditional face recognition technology. The pandemic has also increased the focus on hygienic and contactless identity verification methods. The forearm is a new biometric that contains discriminative information. In this paper, we proposed a multimodal recognition method that combines the veins and geometry of a forearm. Five features are extracted from a forearm Near-Infrared (Near-Infrared) image: SURF, local line structures, global graph representations, forearm width feature and forearm boundary feature. These features are matched individually and then fused at the score level based on the Improved Analytic Hierarchy Process-entropy weight combination. Comprehensive experiments were carried out to evaluate the proposed recognition method and the fusion rule. The matching results showed that the proposed method can achieve a satisfactory performance.</p>","PeriodicalId":48821,"journal":{"name":"IET Biometrics","volume":"12 1","pages":"52-63"},"PeriodicalIF":2.0,"publicationDate":"2022-05-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/bme2.12080","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"50146449","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Towards pen-holding hand pose recognition: A new benchmark and a coarse-to-fine PHHP recognition network
IF 2.0 | CAS Tier 4 (Computer Science) | JCR Q3 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE) | Pub Date: 2022-05-17 | DOI: 10.1049/bme2.12079
Pingping Wu, Lunke Fei, Shuyi Li, Shuping Zhao, Xiaozhao Fang, Shaohua Teng

Hand pose recognition has been one of the most fundamental tasks in computer vision and pattern recognition, and substantial effort has been devoted to this field. However, owing to the lack of a public large-scale benchmark dataset, there is little literature specifically studying pen-holding hand pose (PHHP) recognition. As an attempt to fill this gap, in this paper, a PHHP image dataset consisting of 18,000 PHHP samples is established. To the best of the authors' knowledge, this is the largest vision-based PHHP dataset ever collected. Furthermore, the authors design a coarse-to-fine PHHP recognition network consisting of a coarse multi-feature learning network and a fine pen-grasping-specific feature learning network, where the coarse learning network aims to extensively exploit multiple discriminative features by sharing hand-shape-based spatial attention information, and the fine learning network further learns pen-grasping-specific features by embedding a couple of convolutional block attention modules into three convolution-block models. Experimental results show that the authors' proposed method achieves a very competitive PHHP recognition performance when compared with the baseline recognition models.
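As an illustration of the attention modules mentioned above, the sketch below implements the spatial-attention part of a convolutional block attention module (CBAM): channel-wise average and max maps are concatenated, convolved, and used to reweight every spatial location. Channel counts and kernel size are illustrative, and the channel-attention branch is omitted for brevity.

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """CBAM-style spatial attention over a convolutional feature map."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):                       # x: (B, C, H, W)
        avg = x.mean(dim=1, keepdim=True)       # channel-wise average map
        mx = x.max(dim=1, keepdim=True).values  # channel-wise max map
        attn = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * attn                         # reweight every spatial location

feat = torch.randn(2, 64, 32, 32)
print(SpatialAttention()(feat).shape)           # torch.Size([2, 64, 32, 32])
```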

{"title":"Towards pen-holding hand pose recognition: A new benchmark and a coarse-to-fine PHHP recognition network","authors":"Pingping Wu,&nbsp;Lunke Fei,&nbsp;Shuyi Li,&nbsp;Shuping Zhao,&nbsp;Xiaozhao Fang,&nbsp;Shaohua Teng","doi":"10.1049/bme2.12079","DOIUrl":"10.1049/bme2.12079","url":null,"abstract":"<p>Hand pose recognition has been one of the most fundamental tasks in computer vision and pattern recognition, and substantial effort has been devoted to this field. However, owing to lack of public large-scale benchmark dataset, there is little literature to specially study pen-holding hand pose (PHHP) recognition. As an attempt to fill this gap, in this paper, a PHHP image dataset, consisting of 18,000 PHHP samples is established. To the best of the authors’ knowledge, this is the largest vision-based PHHP dataset ever collected. Furthermore, the authors design a coarse-to-fine PHHP recognition network consisting of a coarse multi-feature learning network and a fine pen-grasping-specific feature learning network, where the coarse learning network aims to extensively exploit the multiple discriminative features by sharing a hand-shape-based spatial attention information, and the fine learning network further learns the pen-grasping-specific features by embedding a couple of convolutional block attention modules into three convolution blocks models. Experimental results show that the authors’ proposed method can achieve a very competitive PHHP recognition performance when compared with the baseline recognition models.</p>","PeriodicalId":48821,"journal":{"name":"IET Biometrics","volume":"11 6","pages":"581-587"},"PeriodicalIF":2.0,"publicationDate":"2022-05-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/bme2.12079","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75725455","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Recognition of human Iris for biometric identification using Daugman's method
IF 2.0 | CAS Tier 4 (Computer Science) | JCR Q3 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE) | Pub Date: 2022-05-14 | DOI: 10.1049/bme2.12074
Reend Tawfik Mohammed, Harleen Kaur, Bhavya Alankar, Ritu Chauhan

Iris identification is a well-known technology used in biometric identification procedures for recognising human beings based on physical characteristics. The texture of the iris is unique, and its anatomy varies from individual to individual. As we know, the physical features of human beings are unique and never change; this has led to significant development in the field of iris recognition. Iris recognition tends to be a reliable domain of technology, as it inherits the random variation of the data. In the proposed approach, we have designed and implemented a framework using various subsystems, where each phase relates to the next stage of the iris recognition system; these stages are segmentation, normalisation, and feature encoding. The study is implemented using MATLAB, where the results are produced using the rapid application development (RAD) approach. We have applied the RAD approach as it offers excellent computing power to generate expeditious results using complex coding, the image processing toolbox, and a high-level programming methodology. Further, the performance of the technology is tested on eye images from the MMU Iris database, CASIA V1, CASIA V2, and the MICHE I and MICHE II iris databases, as well as images captured by an iPhone camera and an Android phone. The emphasis of the current study is to apply the proposed algorithm to achieve high performance under less than ideal conditions.
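Since the pipeline is described in terms of segmentation, normalisation and feature encoding, a short sketch of the comparison stage that typically follows in a Daugman-style system may help: two binary iris codes are compared with a mask-aware normalised Hamming distance. The codes, masks and noise level below are random stand-ins, not output of the authors' MATLAB framework.

```python
import numpy as np

def hamming_distance(code_a, code_b, mask_a, mask_b):
    """Fractional disagreement over the bits both masks mark as valid."""
    valid = mask_a & mask_b                     # ignore eyelid/eyelash/noise bits
    disagree = (code_a ^ code_b) & valid
    return disagree.sum() / max(int(valid.sum()), 1)

rng = np.random.default_rng(4)
code_a = rng.integers(0, 2, 2048).astype(bool)      # toy 2048-bit iris code
code_b = code_a ^ (rng.random(2048) < 0.1)          # ~10% simulated bit flips
mask = np.ones(2048, dtype=bool)                    # pretend every bit is valid
print(hamming_distance(code_a, code_b, mask, mask)) # well below a typical ~0.32 threshold
```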

{"title":"Recognition of human Iris for biometric identification using Daugman’s method","authors":"Reend Tawfik Mohammed,&nbsp;Harleen Kaur,&nbsp;Bhavya Alankar,&nbsp;Ritu Chauhan","doi":"10.1049/bme2.12074","DOIUrl":"10.1049/bme2.12074","url":null,"abstract":"<p>Iris identification is a well-known technology used to detect striking biometric identification procedures for recognizing human beings based on physical behaviour. The texture of the iris is unique and its anatomy varies from individual to individual. As we know, the physical features of human beings are unique, and they never change; this has led to a significant development in the field of iris recognition. Iris recognition tends to be a reliable domain of technology as it inherits the random variation of the data. In the proposed study of approach, we have designed and implemented a framework using various subsystems, where each phase relates to the other iris recognition system, and these stages are discussed as segmentation, normalisation, and feature encoding. The study is implemented using MATLAB where the results are outcast using the rapid application development (RAD) approach. We have applied the RAD domain, as it has an excellent computing power to generate expeditious results using complex coding, image processing toolbox, and high-level programing methodology. Further, the performance of the technology is tested on two informational groups of eye images MMU Iris database, CASIA V1, CASIA V2, MICHE I, MICHE II iris database, and images captured by iPhone camera and Android phone. The emphasis on the current study of approach is to apply the proposed algorithm to achieve high performance with less ideal conditions.</p>","PeriodicalId":48821,"journal":{"name":"IET Biometrics","volume":"11 4","pages":"304-313"},"PeriodicalIF":2.0,"publicationDate":"2022-05-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/bme2.12074","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89044719","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2