Attention-modulated frequency-aware pooling via spatial guidance

Yunzhong Si, Huiying Xu, Xinzhong Zhu, Rihao Liu, Hongbo Li
{"title":"Attention-modulated frequency-aware pooling via spatial guidance","authors":"Yunzhong Si , Huiying Xu , Xinzhong Zhu , Rihao Liu , Hongbo Li","doi":"10.1016/j.neucom.2025.129507","DOIUrl":null,"url":null,"abstract":"<div><div>Pooling is widely used in computer vision to expand the receptive field and enhance semantic understanding by reducing spatial resolution. However, current mainstream downsampling methods primarily rely on local spatial aggregation. While they effectively reduce the spatial resolution of feature maps and extract discriminative features, they are still limited by the constraints of the receptive field and the inadequacy of single-domain information, making it challenging to effectively capture fine details while suppressing noise. To address these limitations, we propose a Dual-Domain Downsampling (D3) method, which leverages the complementarity of spatial and frequency domains. We employ an invertible local two-dimensional Discrete Cosine Transform (2D DCT) transformation to construct a frequency domain pooling window. In the spatial domain, we design an Inverted Multiform Attention Modulator (IMAM) that expands the receptive field through multiform convolutions, while adaptively constructing dynamic frequency weights guided by rich spatial information. This allows for fine-grained modulation of different frequency components, either amplifying or attenuating them in different spatial regions, effectively reducing noise while preserving detail. Extensive experiments on ImageNet-1K, MSCOCO, and complex scene detection datasets across various benchmark models consistently validate the effectiveness of our approach. On the ImageNet-1K classification task, our method achieve up to a 1.95% accuracy improvement, with significant performance gains over state-of-the-art methods on MSCOCO and other challenging detection scenarios. 
The code will be made publicly available at: <span><span>https://github.com/HZAI-ZJNU/D3</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"625 ","pages":"Article 129507"},"PeriodicalIF":5.5000,"publicationDate":"2025-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neurocomputing","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0925231225001791","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
引用次数: 0
Abstract
Pooling is widely used in computer vision to expand the receptive field and enhance semantic understanding by reducing spatial resolution. However, current mainstream downsampling methods rely primarily on local spatial aggregation. While they effectively reduce the spatial resolution of feature maps and extract discriminative features, they remain limited by the constraints of the receptive field and the inadequacy of single-domain information, making it challenging to capture fine details while suppressing noise. To address these limitations, we propose a Dual-Domain Downsampling (D3) method, which leverages the complementarity of the spatial and frequency domains. We employ an invertible local two-dimensional Discrete Cosine Transform (2D DCT) to construct a frequency-domain pooling window. In the spatial domain, we design an Inverted Multiform Attention Modulator (IMAM) that expands the receptive field through multiform convolutions while adaptively constructing dynamic frequency weights guided by rich spatial information. This allows fine-grained modulation of different frequency components, amplifying or attenuating them in different spatial regions, effectively reducing noise while preserving detail. Extensive experiments on ImageNet-1K, MSCOCO, and complex-scene detection datasets across various benchmark models consistently validate the effectiveness of our approach. On the ImageNet-1K classification task, our method achieves up to a 1.95% accuracy improvement, with significant performance gains over state-of-the-art methods on MSCOCO and other challenging detection scenarios. The code will be made publicly available at: https://github.com/HZAI-ZJNU/D3.
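To make the frequency-domain pooling idea concrete, the following is a minimal sketch (not the authors' released implementation) of downsampling via a local 2D DCT: each non-overlapping 2×2 window is transformed into four frequency coefficients, the coefficients are modulated by a weight per frequency component, and the weighted sum becomes the pooled value. The `freq_pool` function and the fixed `weights` argument are illustrative assumptions; in the paper these weights would be produced dynamically per spatial region by the IMAM module.

```python
import numpy as np
from scipy.fft import dctn


def freq_pool(x, weights):
    """Downsample a (H, W) map by 2x via per-window 2D DCT (illustrative sketch).

    x       : (H, W) array with H, W even.
    weights : (2, 2) weights over the four frequency components of each window
              (assumed fixed here; the paper derives them from spatial attention).
    Returns a (H//2, W//2) array: the weighted sum of DCT coefficients per window.
    """
    H, W = x.shape
    # Regroup into non-overlapping 2x2 windows: (H//2, W//2, 2, 2).
    win = x.reshape(H // 2, 2, W // 2, 2).transpose(0, 2, 1, 3)
    # Orthonormal 2D DCT over each window's last two axes.
    coeffs = dctn(win, axes=(-2, -1), norm="ortho")
    # Modulate frequency components, then aggregate to one value per window.
    return (coeffs * weights).sum(axis=(-2, -1))


x = np.arange(16.0).reshape(4, 4)
dc_only = np.array([[1.0, 0.0], [0.0, 0.0]])  # keep only the DC component
pooled = freq_pool(x, dc_only)
```

With `dc_only`, each output equals the orthonormal DC coefficient of its window, which is twice the window mean, so this degenerates to (scaled) average pooling; non-zero weights on the other three coefficients mix in high-frequency detail, which is the complementarity the dual-domain design exploits.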
Journal overview:
Neurocomputing publishes articles describing recent fundamental contributions in the field of neurocomputing. Neurocomputing theory, practice, and applications are the essential topics covered.