Infrared and visible image fusion (IVIF) is an important research field. Through algorithmic fusion, the resulting image preserves both the rich texture details of visible images and the distinctive thermal information of infrared images. Although numerous approaches have been explored and significant progress has been made, extracting effective modality-specific features and designing robust fusion strategies and rules remain major challenges in IVIF. To address these challenges, we propose a novel Hierarchical Feature Information Exchange Network (HFIENet), which comprises two main components: the Information Exchange (IE) module and the Selective Feature Fusion (SFF) module. The IE module employs a cross-attention strategy and a differential weighting operation to exchange information between modalities, enabling the network to extract more significant and comprehensive features from each modality. Because shallow and deep features differ in importance, the SFF module adaptively integrates the features essential for image reconstruction by leveraging attention mechanisms across both the channel and spatial dimensions. Extensive experiments on four publicly available datasets demonstrate that HFIENet consistently outperforms current state-of-the-art methods in both qualitative visual analysis and quantitative metric evaluation. Furthermore, under the same experimental settings, it also improves performance on the downstream semantic segmentation and object detection tasks. Our code and pre-trained model will be available at https://github.com/vonnovx/HFIENet.
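To make the IE module's mechanism concrete, the following is a minimal, dependency-free sketch of one information-exchange step. It is an illustration only, not the paper's implementation: it uses single-head scaled dot-product cross-attention on flattened feature vectors, and the function names, the residual combination, and the mixing weight `alpha` (standing in for the differential weighting operation) are all assumptions for exposition.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def matmul(A, B):
    # Plain list-of-lists matrix product: (m x k) @ (k x n) -> (m x n).
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def cross_attention(q, kv):
    # Single-head attention: softmax(Q K^T / sqrt(d)) V, where one
    # modality provides the queries and the other the keys/values.
    d = len(q[0])
    scores = matmul(q, [list(c) for c in zip(*kv)])  # Q K^T
    attn = [softmax([s / math.sqrt(d) for s in row]) for row in scores]
    return matmul(attn, kv)

def information_exchange(f_ir, f_vis, alpha=0.5):
    # Toy IE step: each modality attends to the other, and an
    # alpha-weighted feature difference re-injects modality-specific
    # cues (a stand-in for the differential weighting operation).
    # alpha is an illustrative hyperparameter, not a value from the paper.
    ir_cross = cross_attention(f_ir, f_vis)
    vis_cross = cross_attention(f_vis, f_ir)
    f_ir_out = [[a + c + alpha * (a - b) for a, c, b in zip(ra, rc, rb)]
                for ra, rc, rb in zip(f_ir, ir_cross, f_vis)]
    f_vis_out = [[a + c + alpha * (a - b) for a, c, b in zip(ra, rc, rb)]
                 for ra, rc, rb in zip(f_vis, vis_cross, f_ir)]
    return f_ir_out, f_vis_out
```

In this toy form the exchanged features keep the same shape as the inputs, so the step can be stacked hierarchically, which mirrors how the IE module is applied at multiple feature levels before the SFF module selects among them.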