Task-Oriented Channel Attention for Fine-Grained Few-Shot Classification

SuBeen Lee;WonJun Moon;Hyun Seok Seong;Jae-Pil Heo
IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 47, no. 3, pp. 1448-1463
DOI: 10.1109/TPAMI.2024.3504537
Published: 2024-11-21
URL: https://ieeexplore.ieee.org/document/10763467/

Abstract

The difficulty of fine-grained image classification mainly comes from a shared overall appearance across classes. Thus, recognizing discriminative details, such as the eyes and beaks of birds, is key to the task. However, this is particularly challenging when training data is limited. To address this, we propose Task Discrepancy Maximization (TDM), a task-oriented channel attention method tailored for fine-grained few-shot classification, with two novel modules: the Support Attention Module (SAM) and the Query Attention Module (QAM). SAM highlights channels encoding class-wise discriminative features, while QAM assigns higher weights to object-relevant channels of the query. Based on these submodules, TDM produces task-adaptive features by focusing on channels that both encode class-discriminative details and are possessed by the query, yielding an accurate class-sensitive similarity measure between support and query instances. While TDM influences high-level feature maps by task-adaptive calibration of channel-wise importance, we further introduce the Instance Attention Module (IAM), an extension of QAM operating in intermediate layers of feature extractors to highlight object-relevant channels on a per-instance basis. The merits of TDM and IAM and their complementary benefits are experimentally validated on fine-grained few-shot classification tasks. Moreover, IAM is also effective in coarse-grained and cross-domain few-shot classification.
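The abstract describes two channel-scoring modules whose weights are combined to reweight features before a support-query similarity measure. The following is a minimal sketch of that idea, not the paper's actual formulation: the exact SAM/QAM score functions are assumptions here (SAM scores a channel by inter-class prototype variance, QAM by the query's activation magnitude), and the function and variable names are illustrative only.

```python
import numpy as np

def task_channel_attention(support, query):
    """Sketch of task-oriented channel attention in the spirit of TDM.

    support: (n_way, n_shot, C) pooled support features
    query:   (C,) pooled query feature
    Returns cosine similarities of the query to each class prototype,
    computed on channel-reweighted features.
    """
    prototypes = support.mean(axis=1)               # (n_way, C)
    # SAM (assumed form): channels on which class prototypes differ
    # most are treated as class-discriminative.
    sam = prototypes.var(axis=0)                    # (C,)
    sam = sam / (sam.sum() + 1e-8)
    # QAM (assumed form): channels strongly activated by the query
    # are treated as object-relevant.
    qam = np.abs(query)
    qam = qam / (qam.sum() + 1e-8)
    # Combine: emphasize channels favored by both modules.
    w = 0.5 * (sam + qam)                           # (C,)
    # Class-sensitive similarity on the reweighted features.
    pw, qw = prototypes * w, query * w
    sims = pw @ qw / (np.linalg.norm(pw, axis=1) * np.linalg.norm(qw) + 1e-8)
    return sims                                     # (n_way,)

rng = np.random.default_rng(0)
support = rng.normal(size=(5, 5, 64))               # 5-way 5-shot, 64 channels
query = support[2].mean(axis=0) + 0.1 * rng.normal(size=64)
scores = task_channel_attention(support, query)
print(int(scores.argmax()))                         # query was drawn near class 2
```

Because the reweighting is shared across all prototypes and the query, it changes which channels dominate the similarity without altering the metric itself; the paper's actual modules are learned, whereas this sketch uses closed-form statistics purely for illustration.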