{"title":"HF2TNet:用于红外和可见光图像融合的分层融合两级训练网络","authors":"Ting Lv;Chuanming Ji;Hong Jiang;Yu Liu","doi":"10.1109/LSP.2024.3486113","DOIUrl":null,"url":null,"abstract":"In the field of infrared and visible image fusion, current algorithms often focus on complex feature extraction and sophisticated fusion mechanisms, ignoring the issues of information redundancy and feature imbalance. These limit effective information aggregation. To address these issues, this paper proposes a hierarchical fusion strategy with a two-stage training network, abbreviated as HF2TNet, which achieves effective information aggregation in a staged manner. In the initial training stage, a three-stream encoder-decoder architecture is proposed, seamlessly integrating CNN and transformer modules. This architecture extracts both global and local features from visible and infrared images, capturing their shared attributes before the fusion process. Moreover, a multi-shared attention module (MSAM) is proposed to profoundly reconstruct and augment the visible and infrared features, ensuring the preservation and enhancement of details across modalities. In the subsequent stage, HF2TNet utilizes the pre-integrated features as query inputs for the dual MSAMs. These modules interact with the previously reconstructed infrared and visible features to enhance complementary information and ensure a balanced feature fusion. Experimental results indicate HF2TNet's superior performance on standard datasets like MSRS and TNO, especially in complex scenes, demonstrating its potential in multimodal image fusion.","PeriodicalId":13154,"journal":{"name":"IEEE Signal Processing Letters","volume":"31 ","pages":"3164-3168"},"PeriodicalIF":3.2000,"publicationDate":"2024-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"HF2TNet: A Hierarchical Fusion Two-Stage Training Network for Infrared and Visible Image Fusion\",\"authors\":\"Ting Lv;Chuanming Ji;Hong Jiang;Yu Liu\",\"doi\":\"10.1109/LSP.2024.3486113\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In the field of infrared and visible image fusion, current algorithms often focus on complex feature extraction and sophisticated fusion mechanisms, ignoring the issues of information redundancy and feature imbalance. These limit effective information aggregation. To address these issues, this paper proposes a hierarchical fusion strategy with a two-stage training network, abbreviated as HF2TNet, which achieves effective information aggregation in a staged manner. In the initial training stage, a three-stream encoder-decoder architecture is proposed, seamlessly integrating CNN and transformer modules. This architecture extracts both global and local features from visible and infrared images, capturing their shared attributes before the fusion process. Moreover, a multi-shared attention module (MSAM) is proposed to profoundly reconstruct and augment the visible and infrared features, ensuring the preservation and enhancement of details across modalities. In the subsequent stage, HF2TNet utilizes the pre-integrated features as query inputs for the dual MSAMs. These modules interact with the previously reconstructed infrared and visible features to enhance complementary information and ensure a balanced feature fusion. 
Experimental results indicate HF2TNet's superior performance on standard datasets like MSRS and TNO, especially in complex scenes, demonstrating its potential in multimodal image fusion.\",\"PeriodicalId\":13154,\"journal\":{\"name\":\"IEEE Signal Processing Letters\",\"volume\":\"31 \",\"pages\":\"3164-3168\"},\"PeriodicalIF\":3.2000,\"publicationDate\":\"2024-10-24\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Signal Processing Letters\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10734176/\",\"RegionNum\":2,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"ENGINEERING, ELECTRICAL & ELECTRONIC\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Signal Processing Letters","FirstCategoryId":"5","ListUrlMain":"https://ieeexplore.ieee.org/document/10734176/","RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
HF2TNet: A Hierarchical Fusion Two-Stage Training Network for Infrared and Visible Image Fusion
Abstract: In the field of infrared and visible image fusion, current algorithms often focus on complex feature extraction and sophisticated fusion mechanisms while overlooking information redundancy and feature imbalance, both of which limit effective information aggregation. To address these issues, this paper proposes a hierarchical fusion strategy with a two-stage training network, abbreviated as HF2TNet, which aggregates information effectively in a staged manner. In the initial training stage, a three-stream encoder-decoder architecture is proposed that seamlessly integrates CNN and transformer modules; it extracts both global and local features from the visible and infrared images, capturing their shared attributes before fusion. In addition, a multi-shared attention module (MSAM) is proposed to reconstruct and augment the visible and infrared features, preserving and enhancing details across modalities. In the subsequent stage, HF2TNet uses the pre-integrated features as query inputs for dual MSAMs, which interact with the previously reconstructed infrared and visible features to strengthen complementary information and ensure a balanced feature fusion. Experimental results on standard datasets such as MSRS and TNO show HF2TNet's superior performance, especially in complex scenes, demonstrating its potential for multimodal image fusion.
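The letter does not include reference code, so the following PyTorch snippet is only a rough, hypothetical sketch of the second-stage mechanism the abstract describes: pre-integrated (stage-1) features act as the query input for two modality-specific attention modules, one attending to the infrared features and one to the visible features. All class and variable names (MSAMSketch, TwoStageFusionSketch, fused, ir_feat, vis_feat) are assumptions, and standard multi-head cross-attention is used as a stand-in for the paper's actual MSAM design.

```python
# Hypothetical sketch of the stage-2 fusion idea; names and the use of
# standard cross-attention are assumptions, not the paper's actual MSAM.
import torch
import torch.nn as nn


class MSAMSketch(nn.Module):
    """Cross-attention block: a query feature attends to a key/value feature."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, query: torch.Tensor, context: torch.Tensor) -> torch.Tensor:
        # Residual cross-attention: query tokens gather complementary
        # information from the context (modality) tokens.
        out, _ = self.attn(query, context, context)
        return self.norm(query + out)


class TwoStageFusionSketch(nn.Module):
    """Stage 2: pre-integrated features query the IR and visible features."""

    def __init__(self, dim: int):
        super().__init__()
        self.msam_ir = MSAMSketch(dim)   # dual MSAMs, one per modality
        self.msam_vis = MSAMSketch(dim)
        self.merge = nn.Linear(2 * dim, dim)

    def forward(self, fused, ir_feat, vis_feat):
        # The pre-integrated (stage-1) feature serves as the query input
        # for both modality-specific attention modules, balancing the
        # complementary information drawn from each branch.
        enhanced_ir = self.msam_ir(fused, ir_feat)
        enhanced_vis = self.msam_vis(fused, vis_feat)
        return self.merge(torch.cat([enhanced_ir, enhanced_vis], dim=-1))


if __name__ == "__main__":
    b, n, d = 2, 64, 128  # batch, tokens, channels (arbitrary toy sizes)
    fused = torch.randn(b, n, d)
    ir_feat = torch.randn(b, n, d)
    vis_feat = torch.randn(b, n, d)
    out = TwoStageFusionSketch(d)(fused, ir_feat, vis_feat)
    print(out.shape)  # torch.Size([2, 64, 128])
```

Treating the fused feature as the query (rather than one of the modalities) is what lets both branches contribute on equal footing, which matches the abstract's stated goal of a balanced feature fusion.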
About the journal:
The IEEE Signal Processing Letters is a monthly, archival publication designed to provide rapid dissemination of original, cutting-edge ideas and timely, significant contributions in signal, image, speech, language, and audio processing. Papers published in the Letters can be presented within one year of their appearance at signal processing conferences such as ICASSP, GlobalSIP, and ICIP, as well as at several workshops organized by the Signal Processing Society.