Attention Label Learning to Enhance Interactive Vein Transformer for Palm-Vein Recognition

Huafeng Qin; Changqing Gong; Yantao Li; Mounim A. El-Yacoubi; Xinbo Gao; Jun Wang
{"title":"Attention Label Learning to Enhance Interactive Vein Transformer for Palm-Vein Recognition","authors":"Huafeng Qin;Changqing Gong;Yantao Li;Mounim A. El-Yacoubi;Xinbo Gao;Jun Wang","doi":"10.1109/TBIOM.2024.3381654","DOIUrl":null,"url":null,"abstract":"In recent years, vein biometrics has gained significant attention due to its high security and privacy features. While deep neural networks have become the predominant classification approaches for their ability to automatically extract discriminative vein features, they still face certain drawbacks: 1) Existing transformer-based vein classifiers struggle to capture interactive information among different attention modules, limiting their feature representation capacity; 2) Current label enhancement methods, although effective in learning label distributions for classifier training, fail to model long-range relations between classes. To address these issues, we present ALE-IVT, an Attention Label Enhancement-based Interactive Vein Transformer for palm-vein recognition. First, to extract vein features, we propose an interactive vein transformer (IVT) consisting of three branches, namely spatial attention, channel attention, and convolutional module. In order to enhance performance, we integrate an interactive module that facilitates the sharing of discriminative features among the three branches. Second, we explore an attention-based label enhancement (ALE) approach to learn label distribution. ALE employs a self-attention mechanism to capture correlation between classes, enabling the inference of label distribution for classifier training. As self-attention can model long-range dependencies between classes, the resulting label distribution provides enhanced supervised information for training the vein classifier. Finally, we combine ALE with IVT to create ALE-IVT, trained in an end-to-end manner to boost the recognition accuracy of the IVT classifier. Our experiments on three public datasets demonstrate that our IVT model surpasses existing state-of-the-art vein classifiers. In addition, ALE outperforms current label enhancement approaches in term of recognition accuracy.","PeriodicalId":73307,"journal":{"name":"IEEE transactions on biometrics, behavior, and identity science","volume":"6 3","pages":"341-351"},"PeriodicalIF":0.0000,"publicationDate":"2024-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on biometrics, behavior, and identity science","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10479213/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

In recent years, vein biometrics has gained significant attention due to its high security and privacy features. While deep neural networks have become the predominant classification approach for their ability to automatically extract discriminative vein features, they still face certain drawbacks: 1) existing transformer-based vein classifiers struggle to capture interactive information among different attention modules, limiting their feature representation capacity; 2) current label enhancement methods, although effective in learning label distributions for classifier training, fail to model long-range relations between classes. To address these issues, we present ALE-IVT, an Attention Label Enhancement-based Interactive Vein Transformer for palm-vein recognition. First, to extract vein features, we propose an interactive vein transformer (IVT) consisting of three branches, namely a spatial attention branch, a channel attention branch, and a convolutional branch. To enhance performance, we integrate an interactive module that facilitates the sharing of discriminative features among the three branches. Second, we explore an attention-based label enhancement (ALE) approach to learn the label distribution. ALE employs a self-attention mechanism to capture correlations between classes, enabling the inference of a label distribution for classifier training. As self-attention can model long-range dependencies between classes, the resulting label distribution provides enhanced supervised information for training the vein classifier. Finally, we combine ALE with IVT to create ALE-IVT, trained in an end-to-end manner to boost the recognition accuracy of the IVT classifier. Our experiments on three public datasets demonstrate that our IVT model surpasses existing state-of-the-art vein classifiers. In addition, ALE outperforms current label enhancement approaches in terms of recognition accuracy.
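
The abstract only outlines the method, so the following is a minimal, hedged sketch of the label-enhancement idea it describes: self-attention over learnable class embeddings produces a soft label distribution that supervises the classifier alongside the usual one-hot targets. All module names, dimensions, and the loss weighting below are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch of attention-based label enhancement (ALE):
# self-attention over learnable class embeddings yields a soft label
# distribution used as extra supervision for the vein classifier.
# Names, dimensions, and loss weighting are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentionLabelEnhancement(nn.Module):
    """Maps one-hot labels to a soft distribution via self-attention over
    class tokens, so correlations between classes can shape the targets."""

    def __init__(self, num_classes: int, embed_dim: int = 128, num_heads: int = 4):
        super().__init__()
        self.class_embed = nn.Parameter(torch.randn(num_classes, embed_dim))
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.to_logits = nn.Linear(embed_dim, num_classes)

    def forward(self, onehot: torch.Tensor) -> torch.Tensor:
        # Every class token attends to every other class token, which is how
        # self-attention captures long-range inter-class relations.
        tokens = self.class_embed.unsqueeze(0)            # (1, C, D)
        ctx, _ = self.attn(tokens, tokens, tokens)        # (1, C, D)
        # Select the context vector of each sample's ground-truth class ...
        gt_ctx = onehot @ ctx.squeeze(0)                  # (B, D)
        # ... and map it to a distribution over all classes (soft supervision).
        return F.softmax(self.to_logits(gt_ctx), dim=-1)  # (B, C)


def joint_loss(backbone, ale, images, labels, num_classes, alpha=0.5):
    """Hard cross-entropy plus a KL term toward the enhanced label distribution;
    both the backbone classifier and the ALE head receive gradients (end-to-end)."""
    logits = backbone(images)                                     # (B, C)
    onehot = F.one_hot(labels, num_classes=num_classes).float()
    soft = alpha * ale(onehot) + (1.0 - alpha) * onehot           # keep ground truth dominant
    kl = F.kl_div(F.log_softmax(logits, dim=-1), soft, reduction="batchmean")
    return F.cross_entropy(logits, labels) + kl
```

Because every class token can attend to every other class token, the enhanced distribution can encode long-range inter-class relations that plain label smoothing cannot; per the abstract, the paper trains this label-enhancement component jointly with the IVT backbone, whose three-branch interactive design is not sketched here.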