DoA-ViT: Dual-objective Affine Vision Transformer for Data Insufficiency

Neurocomputing · Impact Factor 5.5 · CAS Region 2 (Computer Science) · JCR Q1 (Computer Science, Artificial Intelligence) · Published: 2024-11-17 · Volume 615, Article 128896 · DOI: 10.1016/j.neucom.2024.128896
Qiang Ren, Junli Wang
{"title":"DoA-ViT: Dual-objective Affine Vision Transformer for Data Insufficiency","authors":"Qiang Ren,&nbsp;Junli Wang","doi":"10.1016/j.neucom.2024.128896","DOIUrl":null,"url":null,"abstract":"<div><div>Vision Transformers (ViTs) excel in large-scale image recognition tasks but struggle with limited data due to ineffective patch-level local information utilization. Existing methods focus on enhancing local representations at the model level but often treat all features equally, leading to noise from irrelevant information. Effectively distinguishing between discriminative features and irrelevant information helps minimize the interference of noise at the model level. To tackle this, we introduce Dual-objective Affine Vision Transformer (DoA-ViT), which enhances ViTs for data-limited tasks by improving feature discrimination. DoA-ViT incorporates a learnable affine transformation that associates transformed features with class-specific ones while preserving their intrinsic features. Additionally, an adaptive patch-based enhancement mechanism is designed to assign importance scores to patches, minimizing the impact of irrelevant information. These enhancements can be seamlessly integrated into existing ViTs as plug-and-play components. Extensive experiments on small-scale datasets show that DoA-ViT consistently outperforms existing methods, with visualization results highlighting its ability to identify critical image regions effectively.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"615 ","pages":"Article 128896"},"PeriodicalIF":5.5000,"publicationDate":"2024-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neurocomputing","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0925231224016679","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Cited by: 0

Abstract

Vision Transformers (ViTs) excel at large-scale image recognition but struggle when training data is limited, because they make ineffective use of patch-level local information. Existing methods enhance local representations at the model level but typically treat all features equally, so irrelevant information introduces noise. Distinguishing discriminative features from irrelevant information helps minimize this noise at the model level. To tackle this, we introduce the Dual-objective Affine Vision Transformer (DoA-ViT), which strengthens ViTs on data-limited tasks by improving feature discrimination. DoA-ViT incorporates a learnable affine transformation that associates transformed features with class-specific ones while preserving their intrinsic features. In addition, an adaptive patch-based enhancement mechanism assigns importance scores to patches, minimizing the impact of irrelevant information. Both enhancements integrate into existing ViTs as plug-and-play components. Extensive experiments on small-scale datasets show that DoA-ViT consistently outperforms existing methods, and visualization results highlight its ability to identify critical image regions.
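To make the two mechanisms in the abstract concrete, below is a minimal PyTorch sketch. It is not the paper's implementation: the elementwise parameterization of the affine transformation, the sigmoid score head, and all module names are illustrative assumptions about how a per-token affine map and patch importance scoring could be realized as plug-and-play components on ViT patch tokens.

```python
# Hypothetical sketch of the two mechanisms described in the abstract.
# The paper's actual formulation is not reproduced here; the elementwise
# affine parameterization and the score head are illustrative assumptions.
import torch
import torch.nn as nn


class LearnableAffine(nn.Module):
    """Per-channel learnable affine map y = gamma * x + beta on patch tokens.

    Initialized at the identity (gamma = 1, beta = 0), so tokens pass through
    unchanged at the start of training -- one plausible reading of
    "preserving their intrinsic features" (an assumption, not the paper's).
    """

    def __init__(self, dim: int):
        super().__init__()
        self.gamma = nn.Parameter(torch.ones(dim))
        self.beta = nn.Parameter(torch.zeros(dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_patches, dim)
        return x * self.gamma + self.beta


class PatchImportance(nn.Module):
    """Scores each patch token in [0, 1] and reweights it, so patches carrying
    irrelevant information are suppressed (a hypothetical realization of the
    "adaptive patch-based enhancement mechanism")."""

    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(dim, 1), nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = self.score(x)  # (batch, num_patches, 1) importance scores
        return x * w       # down-weight low-importance patches


if __name__ == "__main__":
    tokens = torch.randn(2, 196, 384)  # e.g. ViT-S/16 patch tokens
    enhanced = PatchImportance(384)(LearnableAffine(384)(tokens))
    print(enhanced.shape)  # torch.Size([2, 196, 384])
```

Because both modules map a token tensor to a tensor of the same shape, they can be dropped between existing transformer blocks without changing the backbone, which is consistent with the plug-and-play claim in the abstract.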
Source journal
Neurocomputing (Engineering & Technology – Computer Science: Artificial Intelligence)
CiteScore: 13.10
Self-citation rate: 10.00%
Annual articles: 1382
Review time: 70 days
Aims and scope: Neurocomputing publishes articles describing recent fundamental contributions in the field of neurocomputing. Neurocomputing theory, practice and applications are the essential topics being covered.