
Latest publications in IET Biometrics

Masked face recognition: Human versus machine
IF 2.0 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2022-05-07 | DOI: 10.1049/bme2.12077
Naser Damer, Fadi Boutros, Marius Süßmilch, Meiling Fang, Florian Kirchbuchner, Arjan Kuijper

The recent COVID-19 pandemic has increased the focus on hygienic and contactless identity verification methods. However, the pandemic led to the wide use of face masks, which are essential to keep the pandemic under control. The effect of wearing a mask on face recognition (FR) in a collaborative environment is a currently sensitive yet understudied issue. Recent reports have tackled this by evaluating the effect of masked probes on the performance of automatic FR solutions. However, such solutions can fail in certain processes, leaving the verification task to be performed by a human expert. This work provides a joint evaluation and in-depth analysis of the face verification performance of human experts in comparison to state-of-the-art automatic FR solutions, involving an extensive evaluation by human experts and four automatic recognition solutions. The study concludes with a set of take-home messages on different aspects of the correlation between the verification behaviour of humans and machines.
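The comparison the study rests on can be illustrated with the two standard verification error rates. A minimal sketch with made-up comparison scores and an arbitrary threshold (none of these numbers come from the paper):

```python
def false_match_rate(impostor_scores, threshold):
    """Fraction of non-mated (impostor) comparisons wrongly accepted."""
    return sum(s >= threshold for s in impostor_scores) / len(impostor_scores)

def false_non_match_rate(genuine_scores, threshold):
    """Fraction of mated (genuine) comparisons wrongly rejected."""
    return sum(s < threshold for s in genuine_scores) / len(genuine_scores)

genuine = [0.91, 0.85, 0.78, 0.66, 0.88]   # mated pairs (same identity)
impostor = [0.32, 0.41, 0.58, 0.22, 0.49]  # non-mated pairs

print(false_match_rate(impostor, 0.5))      # 0.2
print(false_non_match_rate(genuine, 0.5))   # 0.0
```

Masked probes typically shift genuine scores downwards, which at a fixed threshold shows up as a higher false non-match rate.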

IET Biometrics, vol. 11, no. 5, pp. 512–528 (2022). Open-access PDF: https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/bme2.12077
Citations: 9
Lip print-based identification using traditional and deep learning
IF 2.0 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2022-05-05 | DOI: 10.1049/bme2.12073
Wardah Farrukh, Dustin van der Haar

The concept of biometric identification is centred around the theory that every individual is unique and has distinct characteristics. Various metrics such as fingerprint, face, iris, or retina are adopted for this purpose. Nonetheless, new alternatives are needed to establish the identity of individuals on occasions where the above techniques are unavailable. One emerging method of human recognition is lip-based identification. It can be treated as a new kind of biometric measure. The patterns found on the human lip are permanent unless subjected to alterations or trauma. Therefore, lip prints can serve the purpose of confirming an individual's identity. The main objective of this work is to design experiments using computer vision methods that can recognise an individual solely based on their lip prints. This article compares traditional and deep learning computer vision methods and how they perform on a common dataset for lip-based identification. The first pipeline is a traditional method with Speeded Up Robust Features with either an SVM or K-NN machine learning classifier, which achieved an accuracy of 95.45% and 94.31%, respectively. A second pipeline compares the performance of the VGG16 and VGG19 deep learning architectures. This approach obtained an accuracy of 91.53% and 93.22%, respectively.
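The classification stage of the traditional pipeline can be sketched with a toy majority-vote nearest-neighbour classifier. The 2-D vectors below are placeholders for real SURF descriptors (which are 64-D), and the subject labels are hypothetical:

```python
import math
from collections import Counter

def knn_predict(train, labels, query, k=3):
    """Classify `query` by majority vote among its k nearest training vectors."""
    dists = sorted((math.dist(x, query), y) for x, y in zip(train, labels))
    votes = Counter(y for _, y in dists[:k])
    return votes.most_common(1)[0][0]

# Toy 2-D "descriptors" for two subjects.
train = [(0.1, 0.2), (0.2, 0.1), (0.9, 0.8), (0.8, 0.9)]
labels = ["subject_a", "subject_a", "subject_b", "subject_b"]

print(knn_predict(train, labels, (0.15, 0.15)))  # subject_a
```

An SVM would replace the vote with a learnt decision boundary over the same feature vectors.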

IET Biometrics, vol. 12, no. 1, pp. 1–12 (2022). Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1049/bme2.12073
Citations: 4
Time–frequency fusion learning for photoplethysmography biometric recognition
IF 2.0 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2022-04-12 | DOI: 10.1049/bme2.12070
Chunying Liu, Jijiang Yu, Yuwen Huang, Fuxian Huang

The photoplethysmography (PPG) signal is a novel biometric trait related to the identity of people; many time- and frequency-domain methods for PPG biometric recognition have been proposed. However, the existing domain methods for PPG biometric recognition only consider a single domain or the feature-level fusion of time and frequency domains, without considering the exploration of the fusion correlations of the time and frequency domains. The authors propose a time–frequency fusion method for PPG biometric recognition with collective matrix factorisation (TFCMF) that leverages collective matrix factorisation to learn a shared latent semantic space by exploring the fusion correlations of the time and frequency domains. In addition, the authors utilise the ℓ2,1 norm to constrain the reconstruction error and shared matrix, which can alleviate the influence of noise and intra-class variation, and ensure the robustness of the learnt semantic space. Experiments demonstrate that TFCMF has better recognition performance than current state-of-the-art methods for PPG biometric recognition.
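Under the common row-wise convention, the ℓ2,1 norm of a matrix is the sum of the ℓ2 norms of its rows, which is why penalising it drives whole rows (e.g. noisy samples) towards zero. A minimal sketch of the norm itself (the convention and toy matrix are illustrative, not taken from the paper):

```python
import math

def l21_norm(matrix):
    """l2,1 norm: sum of the Euclidean (l2) norms of the rows.
    Penalising it encourages entire rows to shrink towards zero."""
    return sum(math.sqrt(sum(v * v for v in row)) for row in matrix)

E = [[3.0, 4.0],    # row norm 5
     [0.0, 0.0],    # row norm 0
     [5.0, 12.0]]   # row norm 13

print(l21_norm(E))  # 18.0
```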

IET Biometrics, vol. 11, no. 3, pp. 187–198 (2022). Open-access PDF: https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/bme2.12070
Citations: 1
Using double attention for text tattoo localisation
IF 2.0 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2022-04-08 | DOI: 10.1049/bme2.12071
Xingpeng Xu, Shitala Prasad, Kuanhong Cheng, Adams Wai Kin Kong

Text tattoos contain rich information about an individual for forensic investigation. To extract this information, text tattoo localisation is the first and essential step. Previous tattoo studies applied existing object detectors to detect general tattoos, but none of them considered text tattoo localisation, and they neglected the prior knowledge that text tattoos are usually inside or near larger tattoos and appear only on human skin. To use this prior knowledge, a prior knowledge-based attention mechanism (PKAM) and a network named Text Tattoo Localisation Network based on Double Attention (TTLN-DA) are proposed. In addition to TTLN-DA, two variants of TTLN-DA are designed to study the effectiveness of different prior knowledge. For this study, NTU Tattoo V2, the largest tattoo dataset, and NTU Text Tattoo V1, the largest text tattoo dataset, are established. To examine the importance of the prior knowledge and the effectiveness of the proposed attention mechanism and the networks, TTLN-DA and its variants are compared with state-of-the-art object detectors and text detectors. The experimental results indicate that the prior knowledge is vital for text tattoo localisation; the PKAM contributes significantly to the performance, and TTLN-DA outperforms the state-of-the-art object detectors and scene text detectors.
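The masking intuition behind prior-knowledge attention can be caricatured as elementwise re-weighting of a feature map by a prior map. The actual PKAM is learnt inside the network, so the maps below are purely hypothetical:

```python
def apply_prior_attention(features, prior):
    """Re-weight a 2-D feature map elementwise by a prior map in [0, 1]
    (e.g. higher weight on skin regions or near larger tattoos)."""
    return [[f * p for f, p in zip(f_row, p_row)]
            for f_row, p_row in zip(features, prior)]

features = [[1.0, 2.0],
            [3.0, 4.0]]
prior = [[1.0, 0.0],    # left column lies on skin, right column does not
         [0.5, 1.0]]

print(apply_prior_attention(features, prior))  # [[1.0, 0.0], [1.5, 4.0]]
```

Responses in regions the prior deems implausible for text tattoos are suppressed before detection.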

IET Biometrics, vol. 11, no. 3, pp. 199–214 (2022). Open-access PDF: https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/bme2.12071
Citations: 1
Reliable detection of doppelgängers based on deep face representations
IF 2.0 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2022-04-04 | DOI: 10.1049/bme2.12072
Christian Rathgeb, Daniel Fischer, Pawel Drozdowski, Christoph Busch

Doppelgängers (or lookalikes) usually yield an increased probability of false matches in a facial recognition system, as opposed to random face image pairs selected for non-mated comparison trials. In this work, the impact of doppelgängers on the HDA Doppelgänger and Disguised Faces in The Wild databases is assessed using a state-of-the-art face recognition system. It is found that doppelgänger image pairs yield very high similarity scores resulting in a significant increase of false match rates. Further, a doppelgänger detection method is proposed, which distinguishes doppelgängers from mated comparison trials by analysing differences in deep representations obtained from face image pairs. The proposed detection system employs a machine learning-based classifier, which is trained with generated doppelgänger image pairs utilising face morphing techniques. Experimental evaluations conducted on the HDA Doppelgänger and Look-Alike Face databases reveal a detection equal error rate of approximately 2.7% for the task of separating mated authentication attempts from doppelgängers.
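The reported detection equal error rate is the operating point where the false match rate and the false non-match rate coincide. A minimal threshold-sweep sketch over made-up score lists (not the paper's data):

```python
def equal_error_rate(genuine, impostor):
    """Sweep candidate thresholds and return the rate at the point where
    false match rate and false non-match rate are closest (the EER)."""
    best_gap, eer = None, None
    for t in sorted(set(genuine) | set(impostor)):
        fmr = sum(s >= t for s in impostor) / len(impostor)
        fnmr = sum(s < t for s in genuine) / len(genuine)
        gap = abs(fmr - fnmr)
        if best_gap is None or gap < best_gap:
            best_gap, eer = gap, (fmr + fnmr) / 2
    return eer

genuine = [0.9, 0.8, 0.7, 0.6]    # mated comparisons
impostor = [0.65, 0.5, 0.4, 0.3]  # doppelgänger comparisons

print(equal_error_rate(genuine, impostor))  # 0.25
```

The paper's point is that with raw similarity scores the doppelgänger distribution overlaps the mated one badly; the proposed classifier over deep-representation differences pushes this EER down to roughly 2.7%.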

IET Biometrics, vol. 11, no. 3, pp. 215–224 (2022). Open-access PDF: https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/bme2.12072
Citations: 1
Profile to frontal face recognition in the wild using coupled conditional generative adversarial network
IF 2.0 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2022-03-10 | DOI: 10.1049/bme2.12069
Fariborz Taherkhani, Veeru Talreja, Jeremy Dawson, Matthew C. Valenti, Nasser M. Nasrabadi

In recent years, with the advent of deep learning, face recognition (FR) has achieved exceptional success. However, many of these deep FR models perform much better in handling frontal faces compared to profile faces. The major reason for poor performance in handling profile faces is that it is inherently difficult to learn pose-invariant deep representations that are useful for profile FR. In this paper, the authors hypothesise that the profile face domain possesses a latent connection with the frontal face domain in a latent feature subspace. The authors look to exploit this latent connection by projecting the profile faces and frontal faces into a common latent subspace and perform verification or retrieval in the latent domain. A coupled conditional generative adversarial network (cpGAN) structure is leveraged to find the hidden relationship between the profile and frontal images in a latent common embedding subspace. Specifically, the cpGAN framework consists of two conditional GAN-based sub-networks, one dedicated to the frontal domain and the other dedicated to the profile domain. Each sub-network tends to find a projection that maximises the pair-wise correlation between the two feature domains in a common embedding feature subspace. The efficacy of the authors' approach compared with the state of the art is demonstrated using the CFP, CMU Multi-PIE, IARPA Janus Benchmark A, and IARPA Janus Benchmark C datasets. Additionally, the authors implemented a coupled convolutional neural network (cpCNN) and an adversarial discriminative domain adaptation network (ADDA) for profile-to-frontal FR, evaluated their performance, and compared them with the proposed cpGAN. Finally, the authors also evaluated the cpGAN for reconstruction of frontal faces from input profile faces contained in the VGGFace2 dataset.
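The core idea of verifying in a shared latent subspace can be sketched with fixed linear projections standing in for the learnt cpGAN mappings. The matrices, features, and threshold below are all hypothetical:

```python
import math

def project(x, W):
    """Linear map into the shared latent subspace; W stands in for a
    learnt per-domain projection and is purely illustrative."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def cosine(u, v):
    """Cosine similarity between two latent vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

# Hypothetical per-domain projections and input features.
W_profile = [[1.0, 0.0, 0.0], [0.0, 1.0, 1.0]]
W_frontal = [[0.0, 1.0, 0.0], [1.0, 0.0, 1.0]]
profile_feat = [0.2, 0.5, 0.5]
frontal_feat = [0.5, 0.2, 0.5]

z_p = project(profile_feat, W_profile)  # [0.2, 1.0]
z_f = project(frontal_feat, W_frontal)  # [0.2, 1.0]
print(cosine(z_p, z_f) > 0.9)  # True: accepted as the same identity
```

Training the two sub-networks so that mated profile/frontal pairs land close together in this shared space is exactly what makes the simple cosine comparison meaningful.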

IET Biometrics, vol. 11, no. 3, pp. 260–276 (2022). Open-access PDF: https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/bme2.12069
Citations: 2
Recognition of the finger vascular system using multi-wavelength imaging
IF 2.0 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2022-03-05 | DOI: 10.1049/bme2.12068
Tomasz Moroń, Krzysztof Bernacki, Jerzy Fiołka, Jia Peng, Adam Popowicz

There has recently been intensive development of methods for identification and personal verification using the human finger vascular system (FVS). The primary focus of these efforts has been the increasingly sophisticated methods of image processing, and frequently employing machine learning. In this article, we present a new concept of imaging in which the finger vasculature is illuminated using different wavelengths of light, generating multiple FVS images. We hypothesised that the analysis of these image sets, instead of individual images, could increase the effectiveness of identification. Analyses of data from over 100 volunteers, using five different deterministic methods for feature extraction, consistently demonstrated improved identification efficiency with the addition of data obtained from another wavelength. The best results were seen for combinations of diodes between 800 and 900 nm. Finger vascular system observations outside this range were of marginal utility. The knowledge gained from this experiment can be utilised by designers of biometric recognition devices leveraging FVS technology. Our results confirm that developments in this field are not restricted to image processing algorithms, and that hardware innovations remain relevant.
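One plausible way to combine evidence from several wavelengths is score-level fusion by weighted sum. The paper combines multi-wavelength data within its deterministic pipelines, so the sketch below is only illustrative, and the scores and wavelengths are hypothetical:

```python
def fuse_scores(scores, weights=None):
    """Weighted-sum (score-level) fusion of per-wavelength match scores."""
    if weights is None:  # default to equal weighting
        weights = [1.0 / len(scores)] * len(scores)
    return sum(w * s for w, s in zip(weights, scores))

# Hypothetical match scores from captures at two wavelengths (e.g. 800/880 nm).
print(fuse_scores([0.70, 0.90]))                # ~0.80 with equal weights
print(fuse_scores([0.70, 0.90], [0.25, 0.75]))  # ~0.85, favouring the 880 nm capture
```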

IET Biometrics, vol. 11, no. 3, pp. 249–259 (2022). Open-access PDF: https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/bme2.12068
Citations: 1