
Latest publications from IET Biometrics

Point-convolution-based human skeletal pose estimation on millimetre wave frequency modulated continuous wave multiple-input multiple-output radar
IF 2.0 | CAS Tier 4, Computer Science | JCR Q3, Computer Science, Artificial Intelligence | Pub Date: 2022-06-13 | DOI: 10.1049/bme2.12081
Jinxiao Zhong, Liangnian Jin, Ran Wang

Compared with traditional approaches based on vision sensors, which can provide a high-resolution representation of targets, millimetre-wave radar is robust to scene lighting and weather conditions and suits a wider range of applications. Current methods of human skeletal pose estimation can reconstruct targets, but they either lose spatial information or do not take the density of the point cloud into consideration. We propose a skeletal pose estimation method that uses point convolution to extract features from the point cloud. By extracting the local information and density of each point in the target's point cloud, the spatial location and structure of the target can be obtained, and the accuracy of the pose estimation is increased. The extraction of point-cloud features is based on point-by-point convolution; that is, different weights are applied to different features of each point, which also increases the nonlinear expression ability of the model. Experiments show that the proposed approach is effective: it yields more distinct skeletal joints and a lower mean absolute error, with average localisation errors of 6.1 cm in X, 3.5 cm in Y and 3.3 cm in Z.
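The point-by-point convolution described in the abstract amounts to a shared linear map (a 1x1 convolution) applied independently to each point's feature vector. A minimal pure-Python sketch under that reading; all names and shapes are illustrative, not taken from the paper:

```python
def pointwise_conv(points, weights, bias):
    """Apply a shared linear map (a 1x1 convolution) independently to each point.

    points:  list of N per-point feature vectors, each of length C_in
    weights: C_out rows, each of length C_in
    bias:    vector of length C_out
    Returns a list of N transformed vectors of length C_out.
    """
    out = []
    for p in points:
        row = []
        for w_row, b in zip(weights, bias):
            # same weights for every point; only the point's features vary
            row.append(sum(w * x for w, x in zip(w_row, p)) + b)
        out.append(row)
    return out


# Identity weights and zero bias leave the per-point features unchanged.
unchanged = pointwise_conv([[1.0, 2.0], [3.0, 4.0]],
                           [[1.0, 0.0], [0.0, 1.0]],
                           [0.0, 0.0])
```

In the paper's setting the per-point features would carry local neighbourhood and density information; stacking such maps with nonlinearities gives the increased expressive power the abstract refers to.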

IET Biometrics, vol. 11, no. 4, pp. 333-342. Open-access PDF: https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/bme2.12081
Citations: 2
Analysis of the synthetic periocular iris images for robust Presentation Attacks Detection algorithms
IF 2.0 | CAS Tier 4, Computer Science | JCR Q3, Computer Science, Artificial Intelligence | Pub Date: 2022-06-07 | DOI: 10.1049/bme2.12084
Jose Maureira, Juan E. Tapia, Claudia Arellano, Christoph Busch

The LivDet-2020 competition, which focuses on Presentation Attacks Detection (PAD) algorithms, still has open problems, mainly unknown attack scenarios. It is therefore crucial to enhance PAD methods, which can be achieved by augmenting the number of Presentation Attack Instrument (PAI) and bona fide (genuine) images used to train such algorithms. Unfortunately, the capture and creation of PAIs, and even the capture of bona fide images, are sometimes complex to achieve. The generation of synthetic images with Generative Adversarial Network (GAN) algorithms may help, and has shown significant improvements in recent years. This paper presents a benchmark of GAN methods for creating a novel synthetic PAI from a small set of periocular near-infrared images. The best PAI was obtained using StyleGAN2, and it was tested against the best PAD algorithm from LivDet-2020. The synthetic PAI was able to fool that algorithm: all images were classified as bona fide. A MobileNetV2 was then trained with the synthetic PAI as a new class to achieve a more robust PAD. The resulting PAD was able to classify 96.7% of synthetic images as attacks, with a BPCER10 of 0.24%. These results demonstrate the need for PAD algorithms to be constantly updated and trained with synthetic images.

IET Biometrics, vol. 11, no. 4, pp. 343-354. Open-access PDF: https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/bme2.12084
Citations: 2
Multiresolution synthetic fingerprint generation
IF 2.0 | CAS Tier 4, Computer Science | JCR Q3, Computer Science, Artificial Intelligence | Pub Date: 2022-06-03 | DOI: 10.1049/bme2.12083
Andre Brasil Vieira Wyzykowski, Mauricio Pamplona Segundo, Rubisley de Paula Lemes

Public access to existing high-resolution fingerprint databases has been discontinued, and no hybrid database exists that contains fingerprints from different sensors at high and medium resolutions. A novel hybrid approach to synthesising realistic, multiresolution, and multisensor fingerprints is presented to address these issues. The first step was to improve Anguli, a handcrafted fingerprint generator, so that it creates pores, scratches, and dynamic ridge maps. The maps are then converted into realistic fingerprints using CycleGAN, which adds texture to the images. Unlike other neural-network-based methods, the authors' method generates multiple images with different resolutions and styles for the same identity. With this approach, a synthetic database of 14,800 fingerprints is built. In addition, fingerprint recognition experiments with pore- and minutiae-based matching techniques, together with several fingerprint quality analyses, are conducted to confirm the similarity between the real and synthetic databases. Finally, a human classification analysis is performed, in which volunteers could not distinguish between authentic and synthetic fingerprints. These experiments demonstrate that the authors' approach is suitable for supporting further fingerprint recognition studies in the absence of real databases.

IET Biometrics, vol. 11, no. 4, pp. 314-332. Open-access PDF: https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/bme2.12083
Citations: 4
Forearm multimodal recognition based on IAHP-entropy weight combination
IF 2.0 | CAS Tier 4, Computer Science | JCR Q3, Computer Science, Artificial Intelligence | Pub Date: 2022-05-27 | DOI: 10.1049/bme2.12080
Chaoying Tang, Mengen Qian, Ru Jia, Haodong Liu, Biao Wang

Biometrics are among the most popular authentication methods due to their advantages over traditional methods, such as higher security, better accuracy, and more convenience. The recent COVID-19 pandemic has led to the wide use of face masks, which greatly affects traditional face recognition technology, and it has also increased the focus on hygienic and contactless identity verification methods. The forearm is a new biometric that contains discriminative information. In this paper, we propose a multimodal recognition method that combines the veins and geometry of a forearm. Five features are extracted from a forearm Near-Infrared (NIR) image: SURF, local line structures, global graph representations, forearm width, and forearm boundary. These features are matched individually and then fused at the score level based on an Improved Analytic Hierarchy Process (IAHP)-entropy weight combination. Comprehensive experiments were carried out to evaluate the proposed recognition method and the fusion rule; the matching results show that the proposed method achieves a satisfactory performance.
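The entropy-weight side of such a fusion assigns larger weights to modalities whose match scores vary more across candidates: a lower entropy of the normalised score distribution means more discriminative information. A minimal pure-Python sketch of the standard entropy-weight method; the IAHP (subjective) weights the paper combines with these are omitted, and all names are illustrative:

```python
import math

def entropy_weights(scores):
    """Entropy-weight method: scores is an m x n matrix of non-negative
    match scores (m candidates, n modalities). Returns n weights summing to 1."""
    m, n = len(scores), len(scores[0])
    k = 1.0 / math.log(m)  # normalises entropy into [0, 1]
    raw = []
    for j in range(n):
        col = [scores[i][j] for i in range(m)]
        total = sum(col)
        p = [c / total for c in col]
        e = -k * sum(pi * math.log(pi) for pi in p if pi > 0)
        raw.append(1.0 - e)  # degree of diversification of modality j
    s = sum(raw)
    return [w / s for w in raw]

def fuse(score_row, weights):
    """Score-level fusion: weighted sum of one candidate's modality scores."""
    return sum(s * w for s, w in zip(score_row, weights))
```

A modality whose scores are identical for every candidate gets entropy 1 and weight 0, since it cannot separate identities.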

IET Biometrics, vol. 12, no. 1, pp. 52-63. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1049/bme2.12080
Citations: 0
Towards pen-holding hand pose recognition: A new benchmark and a coarse-to-fine PHHP recognition network
IF 2.0 | CAS Tier 4, Computer Science | JCR Q3, Computer Science, Artificial Intelligence | Pub Date: 2022-05-17 | DOI: 10.1049/bme2.12079
Pingping Wu, Lunke Fei, Shuyi Li, Shuping Zhao, Xiaozhao Fang, Shaohua Teng

Hand pose recognition is one of the most fundamental tasks in computer vision and pattern recognition, and substantial effort has been devoted to this field. However, owing to the lack of a public large-scale benchmark dataset, little literature specifically studies pen-holding hand pose (PHHP) recognition. As an attempt to fill this gap, this paper establishes a PHHP image dataset consisting of 18,000 PHHP samples. To the best of the authors' knowledge, this is the largest vision-based PHHP dataset collected so far. Furthermore, the authors design a coarse-to-fine PHHP recognition network consisting of a coarse multi-feature learning network and a fine pen-grasping-specific feature learning network. The coarse network aims to extensively exploit multiple discriminative features by sharing hand-shape-based spatial attention information, while the fine network further learns pen-grasping-specific features by embedding convolutional block attention modules into three convolution-block models. Experimental results show that the proposed method achieves a very competitive PHHP recognition performance compared with baseline recognition models.

IET Biometrics, vol. 11, no. 6, pp. 581-587. Open-access PDF: https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/bme2.12079
Citations: 1
Recognition of human Iris for biometric identification using Daugman’s method
IF 2.0 | CAS Tier 4, Computer Science | JCR Q3, Computer Science, Artificial Intelligence | Pub Date: 2022-05-14 | DOI: 10.1049/bme2.12074
Reend Tawfik Mohammed, Harleen Kaur, Bhavya Alankar, Ritu Chauhan

Iris identification is a well-known biometric technology for recognising human beings from a distinctive physical trait. The texture of the iris is unique, and its anatomy varies from individual to individual. Because the physical features of human beings are unique and never change, the field of iris recognition has seen significant development; it tends to be a reliable domain of technology because it exploits the random variation in the data. In the proposed approach, we designed and implemented a framework of subsystems, where each phase feeds the next stage of the iris recognition system; these stages are segmentation, normalisation, and feature encoding. The study is implemented in MATLAB, with the system built using the rapid application development (RAD) approach, chosen for its computing power in generating expeditious results with complex coding, the image processing toolbox, and a high-level programing methodology. Further, the performance of the technology is tested on two groups of eye images: public databases (the MMU Iris database, CASIA V1, CASIA V2, MICHE I, and MICHE II) and images captured by iPhone and Android phone cameras. The emphasis of the current study is to apply the proposed algorithm to achieve high performance under less-than-ideal conditions.
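The normalisation stage in Daugman's method is the "rubber-sheet" model: the iris annulus between the pupil and limbus boundaries is remapped onto a fixed rectangular (r, theta) grid, so that irises with different pupil sizes and dilations become comparable. A minimal pure-Python sketch of the coordinate mapping, assuming circular boundaries; resolutions and names are illustrative:

```python
import math

def rubber_sheet_coords(pupil, iris, radial_res=8, angular_res=16):
    """Sample coordinates for Daugman's rubber-sheet model.

    pupil, iris: boundary circles given as (x, y, radius).
    Returns angular_res * radial_res (x, y) points, linearly
    interpolated between the two boundaries at each angle.
    """
    px, py, pr = pupil
    ix, iy, ir = iris
    grid = []
    for a in range(angular_res):
        theta = 2.0 * math.pi * a / angular_res
        # boundary points on each circle at this angle
        xp, yp = px + pr * math.cos(theta), py + pr * math.sin(theta)
        xi, yi = ix + ir * math.cos(theta), iy + ir * math.sin(theta)
        for r in range(radial_res):
            t = r / (radial_res - 1)  # 0 at the pupil, 1 at the limbus
            grid.append(((1 - t) * xp + t * xi, (1 - t) * yp + t * yi))
    return grid
```

Intensities sampled at these coordinates form the fixed-size normalised strip that the feature-encoding stage operates on.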

IET Biometrics, vol. 11, no. 4, pp. 304-313. Open-access PDF: https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/bme2.12074
Citations: 2
Breast mass classification based on supervised contrastive learning and multi-view consistency penalty on mammography
IF 2.0 | CAS Tier 4, Computer Science | JCR Q3, Computer Science, Artificial Intelligence | Pub Date: 2022-05-12 | DOI: 10.1049/bme2.12076
Lilei Sun, Jie Wen, Junqian Wang, Zheng Zhang, Yong Zhao, Guiying Zhang, Yong Xu

Breast cancer accounts for the largest number of patients among all cancers in the world, and intervention for early breast cancer can dramatically improve a woman's 5-year survival rate. However, the lack of publicly available mammography databases in the field of computer-aided diagnosis, together with insufficient feature extraction from mammograms, limits diagnostic performance for breast cancer. In this paper, a novel classification algorithm based on a Convolutional Neural Network (CNN) is proposed to improve diagnostic performance for breast cancer on mammography. A multi-view network is designed to extract the complementary information between the Craniocaudal (CC) and Mediolateral Oblique (MLO) mammographic views of a breast mass. When the features extracted from the CC and MLO views of the same mass yield different predictions, the proposed algorithm forces the network to extract consistent features from the two views through a cross-entropy function with an added consistency penalty term. To exploit discriminative features from the limited mammographic images, the authors learn an encoder in the classification model by Supervised Contrastive Learning (SCL), obtaining representations of the mammographic breast mass that are invariant to colour jitter and illumination, thereby weakening their effect on image quality degradation. Experimental results for all the classification algorithms mentioned in this paper on the Digital Database for Screening Mammography (DDSM) show that the proposed algorithm greatly improves the classification performance and diagnostic speed for mammographic breast masses, which is of great significance for breast cancer diagnosis.
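The two-view objective can be read as per-view cross-entropy plus a penalty on disagreement between the CC and MLO predictions. A minimal pure-Python sketch; the squared-difference penalty and the weight `lam` are illustrative assumptions, not the paper's exact formulation:

```python
import math

def cross_entropy(probs, label):
    """Cross-entropy of one prediction: probs is a probability vector,
    label is the index of the true class."""
    return -math.log(max(probs[label], 1e-12))

def multiview_loss(p_cc, p_mlo, label, lam=0.1):
    """Per-view cross-entropy plus a consistency penalty that grows
    when the two views' predicted distributions disagree."""
    ce = cross_entropy(p_cc, label) + cross_entropy(p_mlo, label)
    penalty = sum((a - b) ** 2 for a, b in zip(p_cc, p_mlo))
    return ce + lam * penalty
```

Minimising the penalty pushes the network towards features that yield the same prediction from both views of the same mass, which is the stated goal of the consistency term.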

IET Biometrics, vol. 11, no. 6, pp. 588-600. Open-access PDF: https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/bme2.12076
Citations: 0
Masked face recognition: Human versus machine
IF 2.0 | CAS Tier 4, Computer Science | JCR Q3, Computer Science, Artificial Intelligence | Pub Date: 2022-05-07 | DOI: 10.1049/bme2.12077
Naser Damer, Fadi Boutros, Marius Süßmilch, Meiling Fang, Florian Kirchbuchner, Arjan Kuijper

The recent COVID-19 pandemic has increased the focus on hygienic and contactless identity verification methods. However, the pandemic also led to the wide use of face masks, which are essential to keep it under control. The effect of wearing a mask on face recognition (FR) in a collaborative environment is a currently sensitive yet understudied issue. Recent reports have tackled this by evaluating the effect of masked probes on the performance of automatic FR solutions. However, such solutions can fail in certain processes, leaving the verification task to be performed by a human expert. This work provides a joint evaluation and in-depth analysis of the face verification performance of human experts in comparison to state-of-the-art automatic FR solutions, involving an extensive evaluation by human experts and four automatic recognition solutions. The study concludes with a set of take-home messages on different aspects of the correlation between the verification behaviour of humans and machines.

IET Biometrics, vol. 11, no. 5, pp. 512-528. Open-access PDF: https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/bme2.12077
Citations: 9
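Comparisons like the one above between human experts and automatic FR solutions are usually reported as verification error rates. As a generic illustration (toy scores, not the paper's data or protocol), the false match rate (FMR) and false non-match rate (FNMR) at a fixed decision threshold can be computed as:

```python
import numpy as np

def fmr_fnmr(genuine_scores, impostor_scores, threshold):
    """False match rate and false non-match rate at a decision threshold.

    A comparison is accepted when its similarity score >= threshold.
    """
    genuine = np.asarray(genuine_scores, dtype=float)
    impostor = np.asarray(impostor_scores, dtype=float)
    fmr = np.mean(impostor >= threshold)   # impostors wrongly accepted
    fnmr = np.mean(genuine < threshold)    # genuine pairs wrongly rejected
    return fmr, fnmr

# Toy similarity scores standing in for a verifier's output.
genuine = [0.9, 0.8, 0.6, 0.95, 0.7]
impostor = [0.2, 0.4, 0.55, 0.1, 0.3]
fmr, fnmr = fmr_fnmr(genuine, impostor, threshold=0.5)
print(fmr, fnmr)  # 0.2 0.0
```

Sweeping the threshold across the score range traces the trade-off between the two rates; the operating point where FMR equals FNMR gives the equal error rate often quoted in such evaluations.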
Lip print-based identification using traditional and deep learning 基于传统和深度学习的唇印识别
IF 2 CAS Q4 (Computer Science) JCR Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2022-05-05 DOI: 10.1049/bme2.12073
Wardah Farrukh, Dustin van der Haar

The concept of biometric identification is centred around the theory that every individual is unique and has distinct characteristics. Various metrics such as fingerprint, face, iris, or retina are adopted for this purpose. Nonetheless, new alternatives are needed to establish the identity of individuals on occasions where the above techniques are unavailable. One emerging method of human recognition is lip-based identification. It can be treated as a new kind of biometric measure. The patterns found on the human lip are permanent unless subjected to alterations or trauma. Therefore, lip prints can serve the purpose of confirming an individual's identity. The main objective of this work is to design experiments using computer vision methods that can recognise an individual solely based on their lip prints. This article compares traditional and deep learning computer vision methods and how they perform on a common dataset for lip-based identification. The first pipeline is a traditional method using Speeded Up Robust Features with either an SVM or K-NN machine learning classifier, which achieved accuracies of 95.45% and 94.31%, respectively. A second pipeline compares the performance of the VGG16 and VGG19 deep learning architectures, which obtained accuracies of 91.53% and 93.22%, respectively.

{"title":"Lip print-based identification using traditional and deep learning","authors":"Wardah Farrukh,&nbsp;Dustin van der Haar","doi":"10.1049/bme2.12073","DOIUrl":"https://doi.org/10.1049/bme2.12073","url":null,"abstract":"<p>The concept of biometric identification is centred around the theory that every individual is unique and has distinct characteristics. Various metrics such as fingerprint, face, iris, or retina are adopted for this purpose. Nonetheless, new alternatives are needed to establish the identity of individuals on occasions where the above techniques are unavailable. One emerging method of human recognition is lip-based identification. It can be treated as a new kind of biometric measure. The patterns found on the human lip are permanent unless subjected to alternations or trauma. Therefore, lip prints can serve the purpose of confirming an individual's identity. The main objective of this work is to design experiments using computer vision methods that can recognise an individual solely based on their lip prints. This article compares traditional and deep learning computer vision methods and how they perform on a common dataset for lip-based identification. The first pipeline is a traditional method with Speeded Up Robust Features with either an SVM or K-NN machine learning classifier, which achieved an accuracy of 95.45% and 94.31%, respectively. A second pipeline compares the performance of the VGG16 and VGG19 deep learning architectures. 
This approach obtained an accuracy of 91.53% and 93.22%, respectively.</p>","PeriodicalId":48821,"journal":{"name":"IET Biometrics","volume":"12 1","pages":"1-12"},"PeriodicalIF":2.0,"publicationDate":"2022-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/bme2.12073","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"50121827","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
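The first pipeline above pairs SURF descriptors with an SVM or K-NN classifier. As a rough sketch of the K-NN matching stage only (plain NumPy, with synthetic feature vectors standing in for aggregated SURF descriptors — not the paper's implementation):

```python
import numpy as np

def knn_predict(train_X, train_y, query, k=3):
    """k-nearest-neighbour vote over Euclidean distance in feature space.

    train_X: (n, d) descriptor vectors (e.g. aggregated local features),
    train_y: (n,) identity labels, query: (d,) probe descriptor.
    """
    dists = np.linalg.norm(train_X - query, axis=1)  # distance to each gallery sample
    nearest = train_y[np.argsort(dists)[:k]]         # labels of the k closest samples
    labels, counts = np.unique(nearest, return_counts=True)
    return labels[np.argmax(counts)]                 # majority vote

# Two toy identities: feature clusters around 0 and around 1.
rng = np.random.default_rng(0)
id_a = rng.normal(0.0, 0.1, size=(5, 8))
id_b = rng.normal(1.0, 0.1, size=(5, 8))
X = np.vstack([id_a, id_b])
y = np.array([0] * 5 + [1] * 5)
probe = rng.normal(1.0, 0.1, size=8)  # probe drawn near identity 1
print(knn_predict(X, y, probe, k=3))  # → 1
```

In a full pipeline, each image would first be reduced to such a fixed-length vector (e.g. by pooling local descriptors) before the distance comparison.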
Time–frequency fusion learning for photoplethysmography biometric recognition 光体积脉搏波生物识别的时频融合学习
IF 2 CAS Q4 (Computer Science) JCR Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2022-04-12 DOI: 10.1049/bme2.12070
Chunying Liu, Jijiang Yu, Yuwen Huang, Fuxian Huang

Photoplethysmography (PPG) signal is a novel biometric trait related to the identity of people; many time- and frequency-domain methods for PPG biometric recognition have been proposed. However, the existing domain methods for PPG biometric recognition only consider a single domain or the feature-level fusion of time and frequency domains, without considering the exploration of the fusion correlations of the time and frequency domains. The authors propose a time–frequency fusion for a PPG biometric recognition method with collective matrix factorisation (TFCMF) that leverages collective matrix factorisation to learn a shared latent semantic space by exploring the fusion correlations of the time and frequency domains. In addition, the authors utilise the ℓ2,1 norm to constrain the reconstruction error and shared matrix, which can alleviate the influence of noise and intra-class variation, and ensure the robustness of learnt semantic space. Experiments demonstrate that TFCMF has better recognition performance than current state-of-the-art methods for PPG biometric recognition.

{"title":"Time–frequency fusion learning for photoplethysmography biometric recognition","authors":"Chunying Liu,&nbsp;Jijiang Yu,&nbsp;Yuwen Huang,&nbsp;Fuxian Huang","doi":"10.1049/bme2.12070","DOIUrl":"https://doi.org/10.1049/bme2.12070","url":null,"abstract":"<p>Photoplethysmography (PPG) signal is a novel biometric trait related to the identity of people; many time- and frequency-domain methods for PPG biometric recognition have been proposed. However, the existing domain methods for PPG biometric recognition only consider a single domain or the feature-level fusion of time and frequency domains, without considering the exploration of the fusion correlations of the time and frequency domains. The authors propose a time–frequency fusion for a PPG biometric recognition method with collective matrix factorisation (TFCMF) that leverages collective matrix factorisation to learn a shared latent semantic space by exploring the fusion correlations of the time and frequency domains. In addition, the authors utilise the <i>ℓ</i><sub>2,1</sub> norm to constrain the reconstruction error and shared matrix, which can alleviate the influence of noise and intra-class variation, and ensure the robustness of learnt semantic space. 
Experiments demonstrate that TFCMF has better recognition performance than current state-of-the-art methods for PPG biometric recognition.</p>","PeriodicalId":48821,"journal":{"name":"IET Biometrics","volume":"11 3","pages":"187-198"},"PeriodicalIF":2.0,"publicationDate":"2022-04-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/bme2.12070","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91827864","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
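The ℓ2,1 norm that TFCMF uses to constrain the reconstruction error and shared matrix is the sum of the row-wise ℓ2 norms of a matrix; because each row contributes through its whole-row norm, minimising it drives entire rows toward zero, which is what gives the regulariser its robustness to noisy samples. A minimal sketch:

```python
import numpy as np

def l21_norm(M):
    """ℓ2,1 norm: the sum of the ℓ2 norms of each row of M.

    Used as a row-sparsity regulariser: penalising it suppresses
    whole rows (e.g. noisy samples or features) at once.
    """
    return np.sum(np.linalg.norm(M, axis=1))

M = np.array([[3.0, 4.0],
              [0.0, 0.0],
              [1.0, 0.0]])
print(l21_norm(M))  # row norms are 5, 0, 1 → 6.0
```

By contrast, the Frobenius norm squares every entry, so a single large row dominates the penalty; the ℓ2,1 norm grows only linearly in each row's magnitude, making outlying rows cheaper to tolerate.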