Latest Publications in IET Biometrics

Time-frequency fusion learning for photoplethysmography biometric recognition
IF 2.0 | CAS Tier 4, Computer Science | Q2 Computer Science | Pub Date: 2022-04-12 | DOI: 10.1049/bme2.12070
Chunying Liu, Jijiang Yu, Yuwen Huang, Fuxian Huang
{"title":"Time-frequency fusion learning for photoplethysmography biometric recognition","authors":"Chunying Liu, Jijiang Yu, Yuwen Huang, Fuxian Huang","doi":"10.1049/bme2.12070","DOIUrl":"https://doi.org/10.1049/bme2.12070","url":null,"abstract":"","PeriodicalId":48821,"journal":{"name":"IET Biometrics","volume":null,"pages":null},"PeriodicalIF":2.0,"publicationDate":"2022-04-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86318191","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Using double attention for text tattoo localisation
IF 2.0 | CAS Tier 4, Computer Science | Q2 Computer Science | Pub Date: 2022-04-08 | DOI: 10.1049/bme2.12071
Xingpeng Xu, Shitala Prasad, Kuanhong Cheng, Adams Wai Kin Kong

Text tattoos contain rich information about an individual for forensic investigation, and text tattoo localisation is the first and essential step in extracting it. Previous tattoo studies applied existing object detectors to general tattoos, but none considered text tattoo localisation, and they neglected the prior knowledge that text tattoos usually lie inside or near larger tattoos and appear only on human skin. To exploit this prior knowledge, a prior knowledge-based attention mechanism (PKAM) and a network named Text Tattoo Localisation Network based on Double Attention (TTLN-DA) are proposed. In addition, two variants of TTLN-DA are designed to study the effectiveness of different kinds of prior knowledge. For this study, NTU Tattoo V2, the largest tattoo dataset, and NTU Text Tattoo V1, the largest text tattoo dataset, are established. To examine the importance of the prior knowledge and the effectiveness of the proposed attention mechanism and networks, TTLN-DA and its variants are compared with state-of-the-art object detectors and text detectors. The experimental results indicate that the prior knowledge is vital for text tattoo localisation: PKAM contributes significantly to performance, and TTLN-DA outperforms state-of-the-art object detectors and scene text detectors.
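The abstract does not give PKAM's exact form, but the idea of gating features with the two priors (on skin, inside or near a larger tattoo) can be sketched minimally. Below is an illustrative numpy sketch, not the authors' implementation: `prior_knowledge_attention`, its arguments, and `alpha` are hypothetical names, and the per-pixel skin and general-tattoo probability maps are assumed to come from upstream detectors.

```python
# Minimal sketch of a prior-knowledge attention gate in the spirit of PKAM.
# Assumption: upstream detectors expose per-pixel probability maps.
import numpy as np

def prior_knowledge_attention(features, skin_prob, tattoo_prob, alpha=0.5):
    """Reweight a feature map so text-tattoo evidence is boosted where the
    priors hold (on skin, inside or near a larger tattoo).

    features    -- (C, H, W) feature map from a text-detection backbone
    skin_prob   -- (H, W) per-pixel skin probability in [0, 1]
    tattoo_prob -- (H, W) per-pixel general-tattoo probability in [0, 1]
    alpha       -- mixing weight between the two priors (illustrative)
    """
    # Blend the two priors into a single spatial attention mask.
    mask = alpha * skin_prob + (1.0 - alpha) * tattoo_prob
    # Broadcast the (H, W) mask over all C channels.
    return features * mask[None, :, :]

# Toy usage with random maps standing in for real detector outputs.
feats = np.random.rand(64, 32, 32)
skin = np.random.rand(32, 32)
tattoo = np.random.rand(32, 32)
print(prior_knowledge_attention(feats, skin, tattoo).shape)  # (64, 32, 32)
```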

Citations: 1
Reliable detection of doppelgängers based on deep face representations
IF 2.0 | CAS Tier 4, Computer Science | Q2 Computer Science | Pub Date: 2022-04-04 | DOI: 10.1049/bme2.12072
Christian Rathgeb, Daniel Fischer, Pawel Drozdowski, Christoph Busch

Doppelgängers (or lookalikes) usually yield an increased probability of false matches in a facial recognition system, as opposed to random face image pairs selected for non-mated comparison trials. In this work, the impact of doppelgängers is assessed on the HDA Doppelgänger and Disguised Faces in The Wild databases using a state-of-the-art face recognition system. It is found that doppelgänger image pairs yield very high similarity scores, resulting in a significant increase in false match rates. Further, a doppelgänger detection method is proposed, which distinguishes doppelgängers from mated comparison trials by analysing differences in deep representations obtained from face image pairs. The proposed detection system employs a machine learning-based classifier trained with doppelgänger image pairs generated using face morphing techniques. Experimental evaluations on the HDA Doppelgänger and Look-Alike Face databases reveal a detection equal error rate of approximately 2.7% for the task of separating mated authentication attempts from doppelgängers.
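A minimal sketch of the detection idea described above: classify the element-wise difference of two deep face embeddings as "mated" versus "doppelgänger". Everything here is a stand-in under stated assumptions: the embeddings are synthetic, an SVM substitutes for whatever classifier the authors actually use, and the paper's morphing-based pair generation is not reproduced.

```python
# Sketch: doppelgänger detection from differences of deep representations.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
D = 128  # embedding dimensionality (assumption)

# Synthetic stand-in data: mated pairs have small embedding differences,
# doppelgänger pairs have somewhat larger ones.
mated_diff = rng.normal(0.0, 0.1, size=(200, D))
doppel_diff = rng.normal(0.0, 0.25, size=(200, D))

X = np.abs(np.vstack([mated_diff, doppel_diff]))   # |e1 - e2| features
y = np.concatenate([np.zeros(200), np.ones(200)])  # 0 = mated, 1 = doppelgänger

clf = SVC(probability=True).fit(X, y)

# Score a new comparison from its two embeddings e1, e2.
e1, e2 = rng.normal(size=D), rng.normal(size=D)
p_doppel = clf.predict_proba(np.abs(e1 - e2)[None, :])[0, 1]
print(f"P(doppelgänger) = {p_doppel:.3f}")
```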

Citations: 1
Profile to frontal face recognition in the wild using coupled conditional generative adversarial network
IF 2.0 | CAS Tier 4, Computer Science | Q2 Computer Science | Pub Date: 2022-03-10 | DOI: 10.1049/bme2.12069
Fariborz Taherkhani, Veeru Talreja, Jeremy Dawson, Matthew C. Valenti, Nasser M. Nasrabadi

In recent years, with the advent of deep learning, face recognition (FR) has achieved exceptional success. However, many deep FR models handle frontal faces much better than profile faces, chiefly because it is inherently difficult to learn pose-invariant deep representations useful for profile FR. In this paper, the authors hypothesise that the profile face domain possesses a latent connection with the frontal face domain in a latent feature subspace. They exploit this connection by projecting profile and frontal faces into a common latent subspace and performing verification or retrieval in that domain. A coupled conditional generative adversarial network (cpGAN) structure is leveraged to find the hidden relationship between profile and frontal images in a latent common embedding subspace. Specifically, the cpGAN framework consists of two conditional GAN-based sub-networks, one dedicated to the frontal domain and the other to the profile domain. Each sub-network learns a projection that maximises the pair-wise correlation between the two feature domains in a common embedding feature subspace. The efficacy of the approach compared with the state of the art is demonstrated on the CFP, CMU Multi-PIE, IARPA Janus Benchmark A, and IARPA Janus Benchmark C datasets. Additionally, the authors implemented a coupled convolutional neural network (cpCNN) and an adversarial discriminative domain adaptation network (ADDA) for profile-to-frontal FR, evaluated their performance, and compared them with the proposed cpGAN. Finally, they evaluated the cpGAN for reconstruction of frontal faces from input profile faces contained in the VGGFace2 dataset.
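The core coupling objective, projecting both domains into a shared subspace where pair-wise correlation is maximised, can be sketched with plain linear projections and a correlation loss. This is emphatically not the authors' cpGAN (no generators, discriminators, or conditioning); it is a minimal numpy illustration of the correlation term alone, with all names hypothetical.

```python
# Sketch: the cross-domain coupling term, with linear maps standing in for
# the two conditional GAN sub-networks.
import numpy as np

def coupling_loss(Wp, Wf, profile_feats, frontal_feats):
    """Negative mean Pearson correlation between projected pairs.

    Wp, Wf        -- (d, k) projection matrices for each domain
    profile_feats -- (n, d) profile-face features
    frontal_feats -- (n, d) frontal-face features (row i pairs with row i)
    """
    zp = profile_feats @ Wp  # (n, k) profile embeddings
    zf = frontal_feats @ Wf  # (n, k) frontal embeddings
    # Standardise each embedding dimension over the batch.
    zp = (zp - zp.mean(0)) / (zp.std(0) + 1e-8)
    zf = (zf - zf.mean(0)) / (zf.std(0) + 1e-8)
    # Mean per-dimension correlation; negated so minimising maximises it.
    return -np.mean(np.sum(zp * zf, axis=0) / (len(zp) - 1))

# Toy usage: random features and projections.
rng = np.random.default_rng(0)
P, F = rng.normal(size=(32, 256)), rng.normal(size=(32, 256))
Wp, Wf = rng.normal(size=(256, 64)), rng.normal(size=(256, 64))
print(coupling_loss(Wp, Wf, P, F))
```

Once trained, verification or retrieval would compare the projected embeddings directly (e.g. by cosine similarity) in the shared latent space.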

Citations: 2
Recognition of the finger vascular system using multi-wavelength imaging
IF 2.0 | CAS Tier 4, Computer Science | Q2 Computer Science | Pub Date: 2022-03-05 | DOI: 10.1049/bme2.12068
Tomasz Moroń, Krzysztof Bernacki, Jerzy Fiołka, Jia Peng, Adam Popowicz

There has recently been intensive development of methods for identification and personal verification using the human finger vascular system (FVS). The primary focus of these efforts has been increasingly sophisticated image-processing methods, frequently employing machine learning. In this article, we present a new imaging concept in which the finger vasculature is illuminated with different wavelengths of light, generating multiple FVS images. We hypothesised that analysing these image sets, instead of individual images, could increase identification effectiveness. Analyses of data from over 100 volunteers, using five different deterministic feature-extraction methods, consistently demonstrated improved identification efficiency when data from an additional wavelength were included. The best results were obtained for combinations of diodes between 800 and 900 nm; FVS observations outside this range were of marginal utility. The knowledge gained from this experiment can be utilised by designers of biometric recognition devices leveraging FVS technology. Our results confirm that developments in this field are not restricted to image-processing algorithms, and that hardware innovations remain relevant.
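One simple way to combine evidence from two wavelengths, sketched below under assumptions not taken from the paper: a generic feature vector per wavelength and weighted score-level fusion. The function names, the cosine matcher, and the 850/880 nm diode pair are illustrative; the paper evaluates five deterministic extraction methods, any of which could stand behind the feature vectors here.

```python
# Sketch: score-level fusion of per-wavelength FVS match scores.
import numpy as np

def match_score(feat_a, feat_b):
    """Cosine similarity between two FVS feature vectors."""
    return feat_a @ feat_b / (np.linalg.norm(feat_a) * np.linalg.norm(feat_b))

def fused_score(probe, gallery, weights=(0.5, 0.5)):
    """Weighted sum of per-wavelength match scores.

    probe, gallery -- dicts mapping wavelength (nm) -> feature vector
    weights        -- one weight per wavelength, in dict insertion order
    """
    return sum(w * match_score(probe[wl], gallery[wl])
               for w, wl in zip(weights, probe))

# Toy features for an 850 nm / 880 nm diode pair (illustrative wavelengths
# inside the 800-900 nm band the study found most useful).
rng = np.random.default_rng(1)
probe = {850: rng.normal(size=64), 880: rng.normal(size=64)}
gallery = {850: rng.normal(size=64), 880: rng.normal(size=64)}
print(fused_score(probe, gallery))
```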

Citations: 1
Corresponding keypoint constrained sparse representation three-dimensional ear recognition via one sample per person
IF 2.0 | CAS Tier 4, Computer Science | Q2 Computer Science | Pub Date: 2022-03-02 | DOI: 10.1049/bme2.12067
Qinping Zhu, Zhichun Mu, Li Yuan

When only one sample per person (OSPP) is registered in the gallery, it is difficult for ear recognition methods to sufficiently and effectively reduce the search range of matching features, resulting in low computational efficiency and mismatches. A 3D ear biometric system using OSPP is proposed to solve this problem. Ear images are categorised by shape, and corresponding keypoints are obtained by relating keypoints in the ear image to regions (regional clusters) on directional proposals arranged to roughly face the ear image. Ear recognition is then performed by combining the corresponding keypoints with a multi-keypoint-descriptor sparse representation classification method. Experiments on the University of Notre Dame Collection J2 dataset yielded a rank-1 recognition rate of 98.84%, with an identification time of 0.047 ms per gallery subject.
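The sparse representation classification (SRC) step can be sketched generically: a probe descriptor is coded as a sparse combination of gallery descriptors, and the subject whose atoms give the smallest reconstruction residual wins. This is a minimal sketch, not the authors' pipeline: orthogonal matching pursuit stands in for their sparse solver, the keypoint-correspondence stage is not shown, and `src_classify` and its parameters are hypothetical.

```python
# Sketch: sparse-representation classification over a one-sample-per-person
# gallery, using OMP as the sparse coder.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def src_classify(probe, dictionary, labels, n_nonzero=5):
    """probe: (m,) descriptor; dictionary: (m, n) gallery atoms as columns;
    labels: (n,) subject id per atom. Returns the predicted subject id."""
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero,
                                    fit_intercept=False)
    omp.fit(dictionary, probe)
    coef = omp.coef_
    residuals = {}
    for subject in np.unique(labels):
        # Keep only this subject's coefficients and measure the residual.
        c = np.where(labels == subject, coef, 0.0)
        residuals[subject] = np.linalg.norm(probe - dictionary @ c)
    return min(residuals, key=residuals.get)

# Toy gallery: 10 subjects, one 64-D atom each (OSPP), plus a noisy probe.
rng = np.random.default_rng(2)
D = rng.normal(size=(64, 10))
labels = np.arange(10)
probe = D[:, 3] + 0.05 * rng.normal(size=64)
print(src_classify(probe, D, labels))  # expected: 3
```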

Citations: 2