IIS-FVIQA: Finger Vein Image Quality Assessment with intra-class and inter-class similarity

IF 7.5 | CAS Zone 1, Computer Science | JCR Q1, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pattern Recognition | Pub Date: 2024-09-29 | DOI: 10.1016/j.patcog.2024.111056
Hengyi Ren, Lijuan Sun, Xijian Fan, Ying Cao, Qiaolin Ye
{"title":"IIS-FVIQA:利用类内和类间相似性进行手指静脉图像质量评估","authors":"Hengyi Ren ,&nbsp;Lijuan Sun ,&nbsp;Xijian Fan ,&nbsp;Ying Cao ,&nbsp;Qiaolin Ye","doi":"10.1016/j.patcog.2024.111056","DOIUrl":null,"url":null,"abstract":"<div><div>In recent years, Finger Vein Image Quality Assessment (FVIQA) has been recognized as an effective solution to the problem of erroneous recognition resulting from low image quality due to false and missing information in finger vein images, and has become an important part of finger vein recognition systems. Compared to traditional FVIQA methods that rely on domain knowledge, newer methods that reject low-quality images have been favored for their independence from human interference. However, these methods only consider intra-class similarity information and ignore valuable information from inter-class distribution, which is also an important factor in evaluating the performance of recognition systems. In this work, we propose a novel FVIQA approach, named IIS-FVIQA, which concurrently takes into account the intra-class similarity density and inter-class similarity distribution distance within recognition systems. Specifically, our method generates quality scores for finger vein images by combining the information entropy of intra-class similarity distribution and Wasserstein distance of inter-class distribution. Then, we train a regression network for quality prediction using training images and corresponding quality scores. When a new image enters the recognition system, the trained regression network directly predicts the quality score of the image, making it easier for the system to select the corresponding operation based on the quality score of the image. Extensive experiments conducted on benchmark datasets demonstrate that the IIS-FVIQA method proposed in this paper consistently achieves top performance across multiple public datasets. After filtering out 10% of low-quality images predicted by the quality regression network, the recognition system’s performance improves by 43.96% (SDUMLA), 32.23% (MMCBNU_6000), and 21.20% (FV-USM), respectively. Furthermore, the method exhibits strong generalizability across different recognition algorithms (e.g., LBP, MC, and Inception V3) and datasets (e.g., SDUMLA, MMCBNU_6000, and FV-USM).</div></div>","PeriodicalId":49713,"journal":{"name":"Pattern Recognition","volume":"158 ","pages":"Article 111056"},"PeriodicalIF":7.5000,"publicationDate":"2024-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"IIS-FVIQA: Finger Vein Image Quality Assessment with intra-class and inter-class similarity\",\"authors\":\"Hengyi Ren ,&nbsp;Lijuan Sun ,&nbsp;Xijian Fan ,&nbsp;Ying Cao ,&nbsp;Qiaolin Ye\",\"doi\":\"10.1016/j.patcog.2024.111056\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>In recent years, Finger Vein Image Quality Assessment (FVIQA) has been recognized as an effective solution to the problem of erroneous recognition resulting from low image quality due to false and missing information in finger vein images, and has become an important part of finger vein recognition systems. Compared to traditional FVIQA methods that rely on domain knowledge, newer methods that reject low-quality images have been favored for their independence from human interference. 
However, these methods only consider intra-class similarity information and ignore valuable information from inter-class distribution, which is also an important factor in evaluating the performance of recognition systems. In this work, we propose a novel FVIQA approach, named IIS-FVIQA, which concurrently takes into account the intra-class similarity density and inter-class similarity distribution distance within recognition systems. Specifically, our method generates quality scores for finger vein images by combining the information entropy of intra-class similarity distribution and Wasserstein distance of inter-class distribution. Then, we train a regression network for quality prediction using training images and corresponding quality scores. When a new image enters the recognition system, the trained regression network directly predicts the quality score of the image, making it easier for the system to select the corresponding operation based on the quality score of the image. Extensive experiments conducted on benchmark datasets demonstrate that the IIS-FVIQA method proposed in this paper consistently achieves top performance across multiple public datasets. After filtering out 10% of low-quality images predicted by the quality regression network, the recognition system’s performance improves by 43.96% (SDUMLA), 32.23% (MMCBNU_6000), and 21.20% (FV-USM), respectively. Furthermore, the method exhibits strong generalizability across different recognition algorithms (e.g., LBP, MC, and Inception V3) and datasets (e.g., SDUMLA, MMCBNU_6000, and FV-USM).</div></div>\",\"PeriodicalId\":49713,\"journal\":{\"name\":\"Pattern Recognition\",\"volume\":\"158 \",\"pages\":\"Article 111056\"},\"PeriodicalIF\":7.5000,\"publicationDate\":\"2024-09-29\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Pattern Recognition\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0031320324008070\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Pattern Recognition","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0031320324008070","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

In recent years, Finger Vein Image Quality Assessment (FVIQA) has been recognized as an effective solution to the problem of erroneous recognition resulting from low image quality due to false and missing information in finger vein images, and has become an important part of finger vein recognition systems. Compared to traditional FVIQA methods that rely on domain knowledge, newer methods that reject low-quality images have been favored for their independence from human interference. However, these methods only consider intra-class similarity information and ignore valuable information from the inter-class distribution, which is also an important factor in evaluating the performance of recognition systems. In this work, we propose a novel FVIQA approach, named IIS-FVIQA, which concurrently takes into account the intra-class similarity density and inter-class similarity distribution distance within recognition systems. Specifically, our method generates quality scores for finger vein images by combining the information entropy of the intra-class similarity distribution and the Wasserstein distance of the inter-class distribution. Then, we train a regression network for quality prediction using training images and their corresponding quality scores. When a new image enters the recognition system, the trained regression network directly predicts the quality score of the image, making it easier for the system to select the corresponding operation based on that score. Extensive experiments conducted on benchmark datasets demonstrate that the proposed IIS-FVIQA method consistently achieves top performance across multiple public datasets. After filtering out the 10% of low-quality images predicted by the quality regression network, the recognition system’s performance improves by 43.96% (SDUMLA), 32.23% (MMCBNU_6000), and 21.20% (FV-USM), respectively. Furthermore, the method exhibits strong generalizability across different recognition algorithms (e.g., LBP, MC, and Inception V3) and datasets (e.g., SDUMLA, MMCBNU_6000, and FV-USM).
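The scoring step described above can be illustrated with a short sketch. The following Python snippet is a hypothetical example, not the authors' released code: it assumes that match similarities have already been computed between a probe image and same-finger (intra-class) and different-finger (inter-class) gallery templates, estimates the Shannon entropy of the intra-class similarity histogram, measures the Wasserstein distance between the two distributions, and combines them with an assumed weight `alpha`. The function names, histogram binning, and weighting scheme are illustrative assumptions rather than the paper's exact formulation.

```python
# Minimal sketch (assumptions, not the authors' code): combine intra-class
# similarity entropy with the intra/inter-class Wasserstein distance.
import numpy as np
from scipy.stats import entropy, wasserstein_distance


def similarity_entropy(intra_sims, bins=32):
    """Shannon entropy of the intra-class similarity distribution (histogram estimate)."""
    hist, _ = np.histogram(intra_sims, bins=bins, range=(0.0, 1.0), density=True)
    pmf = hist / (hist.sum() + 1e-12)   # normalize to a probability mass function
    return entropy(pmf)                 # scipy handles zero bins correctly


def quality_score(intra_sims, inter_sims, alpha=0.5, bins=32):
    """Higher score = compact genuine-match distribution (low entropy) plus
    large separation between genuine and impostor distributions."""
    h = similarity_entropy(np.asarray(intra_sims), bins=bins)
    w = wasserstein_distance(np.asarray(intra_sims), np.asarray(inter_sims))
    # Entropy is normalized by log(bins) so both terms live on comparable scales.
    return alpha * (1.0 - h / np.log(bins)) + (1.0 - alpha) * w


# Toy usage with synthetic similarities: genuine matches cluster near 1,
# impostor matches cluster near 0.
intra = np.random.beta(8, 2, size=50)
inter = np.random.beta(2, 8, size=500)
print(f"quality score: {quality_score(intra, inter):.3f}")
```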
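The second step in the abstract, training a regression network that predicts the quality score directly from a new image, can likewise be sketched. The snippet below is a hypothetical PyTorch example: a tiny convolutional regressor trained with MSE loss against precomputed quality scores. The architecture, input size, and hyperparameters are assumptions and not the network described in the paper.

```python
# Hypothetical quality-regression sketch (assumed architecture, not the paper's):
# map a grayscale finger-vein image to a scalar quality score, trained with MSE.
import torch
import torch.nn as nn


class QualityRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, x):
        f = self.features(x).flatten(1)
        return self.head(f).squeeze(1)  # one quality score per image


model = QualityRegressor()
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One illustrative training step on a synthetic batch of 64x64 images with
# quality targets in [0, 1] (e.g., produced by the scoring sketch above).
images = torch.randn(8, 1, 64, 64)
targets = torch.rand(8)
loss = criterion(model(images), targets)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

At inference time, the predicted score lets the recognition system decide whether to accept, re-capture, or reject an incoming image before matching.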
Source journal: Pattern Recognition (Engineering & Technology — Electrical & Electronic Engineering)
CiteScore: 14.40
Self-citation rate: 16.20%
Articles published per year: 683
Review time: 5.6 months
Journal description: The field of Pattern Recognition is both mature and rapidly evolving, playing a crucial role in related fields such as computer vision, image processing, text analysis, and neural networks. It closely intersects with machine learning and is being applied in emerging areas like biometrics, bioinformatics, multimedia data analysis, and data science. The journal Pattern Recognition, established half a century ago during the early days of computer science, has since grown significantly in scope and influence.