Mingxin Yu , Xufan Miao , Yichen Sun , Yuchen Bai , Lianqing Zhu
{"title":"AFIRE:用于在异构成像环境中提取适应光照的特征的自适应 FusionNet","authors":"Mingxin Yu , Xufan Miao , Yichen Sun , Yuchen Bai , Lianqing Zhu","doi":"10.1016/j.infrared.2024.105557","DOIUrl":null,"url":null,"abstract":"<div><p>The fusion of infrared and visible images aims to synthesize a fused image that incorporates richer information by leveraging the distinct characteristics of each modality. However, the disparate quality of input images in terms of infrared and visible light significantly impacts fusion performance. To address this issue, we propose a novel deep adaptive fusion method called Adaptive FusionNet for Illumination-Robust Feature Extraction (AFIRE). This method involves the interactive processing of two input features and dynamically adjusts the fusion weights based on varying illumination conditions. Specifically, we introduce a novel interactive extraction structure during the feature extraction stage for both infrared and visible light, enabling the capture of more complementary information. Additionally, we design a Deep Adaptive Fusion module to assess the quality of input features and perform weighted fusion through a channel attention mechanism. Finally, a new loss function is formulated by incorporating the entropy and median of input images to guide the training of the fusion network. Extensive experiments demonstrate that AFIRE outperforms state-of-the-art methods in preserving pixel intensity distribution and texture details. Source code is available at: <span><span>https://www.github.com/ISCLab-Bistu/AFIRE</span><svg><path></path></svg></span>.</p></div>","PeriodicalId":13549,"journal":{"name":"Infrared Physics & Technology","volume":null,"pages":null},"PeriodicalIF":3.1000,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"AFIRE: Adaptive FusionNet for illumination-robust feature extraction in heterogeneous imaging environments\",\"authors\":\"Mingxin Yu , Xufan Miao , Yichen Sun , Yuchen Bai , Lianqing Zhu\",\"doi\":\"10.1016/j.infrared.2024.105557\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>The fusion of infrared and visible images aims to synthesize a fused image that incorporates richer information by leveraging the distinct characteristics of each modality. However, the disparate quality of input images in terms of infrared and visible light significantly impacts fusion performance. To address this issue, we propose a novel deep adaptive fusion method called Adaptive FusionNet for Illumination-Robust Feature Extraction (AFIRE). This method involves the interactive processing of two input features and dynamically adjusts the fusion weights based on varying illumination conditions. Specifically, we introduce a novel interactive extraction structure during the feature extraction stage for both infrared and visible light, enabling the capture of more complementary information. Additionally, we design a Deep Adaptive Fusion module to assess the quality of input features and perform weighted fusion through a channel attention mechanism. Finally, a new loss function is formulated by incorporating the entropy and median of input images to guide the training of the fusion network. Extensive experiments demonstrate that AFIRE outperforms state-of-the-art methods in preserving pixel intensity distribution and texture details. 
Source code is available at: <span><span>https://www.github.com/ISCLab-Bistu/AFIRE</span><svg><path></path></svg></span>.</p></div>\",\"PeriodicalId\":13549,\"journal\":{\"name\":\"Infrared Physics & Technology\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":3.1000,\"publicationDate\":\"2024-09-16\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Infrared Physics & Technology\",\"FirstCategoryId\":\"101\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S1350449524004419\",\"RegionNum\":3,\"RegionCategory\":\"物理与天体物理\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"INSTRUMENTS & INSTRUMENTATION\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Infrared Physics & Technology","FirstCategoryId":"101","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1350449524004419","RegionNum":3,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"INSTRUMENTS & INSTRUMENTATION","Score":null,"Total":0}
AFIRE: Adaptive FusionNet for illumination-robust feature extraction in heterogeneous imaging environments
The fusion of infrared and visible images aims to synthesize a fused image that incorporates richer information by leveraging the distinct characteristics of each modality. However, the disparate quality of input images in terms of infrared and visible light significantly impacts fusion performance. To address this issue, we propose a novel deep adaptive fusion method called Adaptive FusionNet for Illumination-Robust Feature Extraction (AFIRE). This method involves the interactive processing of two input features and dynamically adjusts the fusion weights based on varying illumination conditions. Specifically, we introduce a novel interactive extraction structure during the feature extraction stage for both infrared and visible light, enabling the capture of more complementary information. Additionally, we design a Deep Adaptive Fusion module to assess the quality of input features and perform weighted fusion through a channel attention mechanism. Finally, a new loss function is formulated by incorporating the entropy and median of input images to guide the training of the fusion network. Extensive experiments demonstrate that AFIRE outperforms state-of-the-art methods in preserving pixel intensity distribution and texture details. Source code is available at: https://www.github.com/ISCLab-Bistu/AFIRE.
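The abstract's Deep Adaptive Fusion module and its entropy/median-guided loss can be sketched roughly as follows. This is a minimal PyTorch illustration, assuming channel attention implemented as global pooling plus a small MLP and a scalar entropy-times-median weight; class names, layer sizes, and the exact weighting formula are assumptions for illustration, not the authors' released code (see the GitHub repository above for the official implementation).

# Minimal sketch (not the official AFIRE release): a channel-attention weighted
# fusion module and an entropy/median-based illumination weight, following the
# description in the abstract. All names and layer sizes are illustrative.
import torch
import torch.nn as nn

class DeepAdaptiveFusionSketch(nn.Module):
    """Scores infrared and visible feature maps with channel attention and
    fuses them with the resulting weights, so the better-conditioned modality
    contributes more to the fused feature."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        # Shared MLP producing one per-channel score vector per modality.
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, feat_ir: torch.Tensor, feat_vis: torch.Tensor) -> torch.Tensor:
        s_ir = self.mlp(self.pool(feat_ir).flatten(1))    # (B, C)
        s_vis = self.mlp(self.pool(feat_vis).flatten(1))  # (B, C)
        # Softmax across the two modalities yields adaptive fusion weights.
        w = torch.softmax(torch.stack([s_ir, s_vis]), dim=0)
        w_ir = w[0].unsqueeze(-1).unsqueeze(-1)
        w_vis = w[1].unsqueeze(-1).unsqueeze(-1)
        return w_ir * feat_ir + w_vis * feat_vis

def illumination_weight(img: torch.Tensor, bins: int = 256) -> torch.Tensor:
    """Scalar combining image entropy and median intensity; one plausible way
    to weight the per-modality intensity loss, not the paper's exact formula."""
    hist = torch.histc(img, bins=bins, min=0.0, max=1.0)
    p = hist / hist.sum().clamp(min=1e-8)
    entropy = -(p * torch.log2(p.clamp(min=1e-8))).sum()
    return entropy * img.median()

A training loop would then scale the per-modality intensity terms of the fusion loss by these weights; the repository linked above contains the actual loss definition used in the paper.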
About the journal:
The Journal covers the entire field of infrared physics and technology: theory, experiment, application, devices and instrumentation. "Infrared" is defined as covering the near, mid and far infrared (terahertz) regions from 0.75 µm (750 nm) to 1 mm (300 GHz). Submissions in the 300 GHz to 100 GHz region may be accepted at the editors' discretion if their content is relevant to shorter wavelengths. Submissions must be primarily concerned with and directly relevant to this spectral region.
Its core topics can be summarized as the generation, propagation and detection of infrared radiation; the associated optics, materials and devices; and its use in all fields of science, industry, engineering and medicine.
Infrared techniques occur in many different fields, notably spectroscopy and interferometry; material characterization and processing; atmospheric physics, astronomy and space research. Scientific aspects include lasers, quantum optics, quantum electronics, image processing and semiconductor physics. Some important applications are medical diagnostics and treatment, industrial inspection and environmental monitoring.