Zhu Chen, Fan Li, Yueqin Diao, Wanlong Zhao, Puyin Fan
Journal of King Saud University - Computer and Information Sciences, Vol. 36, Issue 10, Article 102230
Published: 2024-11-07 · DOI: 10.1016/j.jksuci.2024.102230 · Impact Factor: 5.2 (JCR Q1, Computer Science, Information Systems)
Knowledge-embedded multi-layer collaborative adaptive fusion network: Addressing challenges in foggy conditions and complex imaging
Infrared and visible image fusion aims to generate high-quality images that serve both human and machine visual perception under extreme imaging conditions. However, current fusion methods rely primarily on datasets of infrared and visible images captured under clear weather. When applied to real-world scenarios, image fusion inevitably encounters adverse weather such as heavy fog, which makes it difficult to extract effective information and degrades visual perception. To address these challenges, this paper proposes a Mean Teacher-based Self-supervised Image Restoration and multimodal Image Fusion joint learning network (SIRIFN), which improves the fusion network's robustness in adverse weather by applying deep supervision from a guiding (teacher) network to the learning (student) network. Furthermore, to strengthen the network's information extraction and integration capabilities, a Multi-level Feature Collaborative adaptive Reconstruction Network (MFCRNet) is introduced; it adopts a multi-branch, multi-scale design with differentiated processing strategies for different feature types. This approach preserves rich texture information while maintaining semantic consistency with the source images. Extensive experiments demonstrate that SIRIFN outperforms current state-of-the-art algorithms in both visual quality and quantitative evaluation. In particular, performing image restoration and multimodal fusion jointly provides more effective information under extreme weather conditions, thereby facilitating downstream visual tasks.
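The Mean Teacher scheme the abstract refers to is a standard self-supervised technique: the teacher (guiding) network's weights are an exponential moving average (EMA) of the student (learning) network's weights, and the student is additionally trained to match the teacher's predictions. The following is a minimal, hypothetical sketch of that core mechanism with plain lists standing in for model parameters; it is not SIRIFN's actual implementation, whose architecture and losses are not given here.

```python
# Minimal Mean Teacher sketch (hypothetical; parameters are plain floats,
# not SIRIFN's real network weights).

def ema_update(teacher_w, student_w, alpha=0.99):
    """Move each teacher weight toward the corresponding student weight.

    alpha is the EMA decay: higher alpha means a slower-moving,
    more stable teacher.
    """
    return [alpha * t + (1.0 - alpha) * s
            for t, s in zip(teacher_w, student_w)]

def consistency_loss(teacher_out, student_out):
    """Mean squared error between teacher and student predictions,
    used as the self-supervised consistency signal."""
    n = len(teacher_out)
    return sum((t - s) ** 2 for t, s in zip(teacher_out, student_out)) / n

# Toy usage: over training steps the teacher drifts toward the student,
# while the consistency loss pulls the student toward the teacher.
teacher = [0.0, 0.0]
student = [1.0, 2.0]
for _ in range(3):
    teacher = ema_update(teacher, student, alpha=0.5)
print(teacher)  # [0.875, 1.75] — converging toward the student's weights
```

In practice both networks would be full restoration/fusion models and the consistency term would be combined with the supervised fusion losses; the EMA update itself is the part that makes the teacher a smoothed, more reliable source of supervision.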
Journal introduction:
In 2022 the Journal of King Saud University - Computer and Information Sciences became an author-paid open access journal. Authors who submitted their manuscript after October 31st, 2021 are asked to pay an Article Processing Charge (APC) upon acceptance to make their work immediately, permanently, and freely accessible to all. The Journal of King Saud University - Computer and Information Sciences is a refereed, international journal covering both the foundations of computer science and its practical applications.