Attention-modulated frequency-aware pooling via spatial guidance

IF 6.5 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Neurocomputing · Pub Date: 2025-04-07 · Epub Date: 2025-01-27 · DOI: 10.1016/j.neucom.2025.129507
Yunzhong Si, Huiying Xu, Xinzhong Zhu, Rihao Liu, Hongbo Li
Citations: 0

Abstract

Pooling is widely used in computer vision to expand the receptive field and enhance semantic understanding by reducing spatial resolution. However, current mainstream downsampling methods primarily rely on local spatial aggregation. While they effectively reduce the spatial resolution of feature maps and extract discriminative features, they are still limited by the constraints of the receptive field and the inadequacy of single-domain information, making it challenging to effectively capture fine details while suppressing noise. To address these limitations, we propose a Dual-Domain Downsampling (D3) method, which leverages the complementarity of the spatial and frequency domains. We employ an invertible local two-dimensional Discrete Cosine Transform (2D DCT) to construct a frequency-domain pooling window. In the spatial domain, we design an Inverted Multiform Attention Modulator (IMAM) that expands the receptive field through multiform convolutions, while adaptively constructing dynamic frequency weights guided by rich spatial information. This allows for fine-grained modulation of different frequency components, either amplifying or attenuating them in different spatial regions, effectively reducing noise while preserving detail. Extensive experiments on ImageNet-1K, MSCOCO, and complex scene detection datasets across various benchmark models consistently validate the effectiveness of our approach. On the ImageNet-1K classification task, our method achieves up to a 1.95% accuracy improvement, with significant performance gains over state-of-the-art methods on MSCOCO and other challenging detection scenarios. The code will be made publicly available at: https://github.com/HZAI-ZJNU/D3.
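The frequency-domain pooling window described in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation (their code is linked above): the names `dct_matrix` and `d3_downsample`, and the fixed per-window `weights` argument, are assumptions for illustration — the paper's IMAM module instead predicts frequency weights dynamically from spatial attention, per region.

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis: rows are basis vectors, so M @ M.T = I.
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    M = np.cos(np.pi * k * (2 * x + 1) / (2 * n))
    M[0] /= np.sqrt(2)
    return M * np.sqrt(2.0 / n)

def d3_downsample(feat, win=2, weights=None):
    """Frequency-aware pooling sketch (single channel).

    Each non-overlapping win x win window is transformed by a local
    2D DCT, its frequency coefficients are optionally reweighted, the
    window is inverse-transformed, and the result is averaged to one
    output value.  feat: (H, W) with H, W divisible by win.
    """
    H, W = feat.shape
    D = dct_matrix(win)
    # Split the map into (H/win, W/win) windows of shape (win, win).
    blocks = feat.reshape(H // win, win, W // win, win).transpose(0, 2, 1, 3)
    # Local 2D DCT per window: C = D @ B @ D.T
    coefs = np.einsum('ij,abjk,lk->abil', D, blocks, D)
    if weights is not None:
        # Amplify or attenuate frequency components (fixed here; the
        # paper derives these weights from spatial attention instead).
        coefs = coefs * weights
    # Inverse 2D DCT: B' = D.T @ C @ D, then average-pool the window.
    recon = np.einsum('ji,abjk,kl->abil', D, coefs, D)
    return recon.mean(axis=(2, 3))
```

Passing a `weights` matrix that down-weights high-frequency coefficients yields a noise-suppressing pooling; with `weights=None` the operation reduces exactly to average pooling, since the orthonormal DCT is invertible.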
Source journal
Neurocomputing (Engineering & Technology – Computer Science: Artificial Intelligence)
CiteScore: 13.10
Self-citation rate: 10.00%
Annual article count: 1382
Review time: 70 days
Journal description: Neurocomputing publishes articles describing recent fundamental contributions in the field of neurocomputing. Neurocomputing theory, practice and applications are the essential topics being covered.