ALFusion: Adaptive fusion for infrared and visible images under complex lighting conditions
Hanlin Xu, Gang Liu, Yao Qian, Xiangbo Zhang, Durga Prasad Bavirisetti
Digital Signal Processing, vol. 156, Article 104864, 2024. DOI: 10.1016/j.dsp.2024.104864
https://www.sciencedirect.com/science/article/pii/S1051200424004883
Cited by: 0
Abstract
In infrared and visible image fusion, source images often exhibit complex and variable characteristics caused by scene illumination. To address the challenges posed by complex lighting conditions and to improve the quality of fused images, we develop ALFusion, a method that combines dynamic convolution and a Transformer for infrared and visible image fusion. The core idea is to exploit the adaptability of dynamic convolution together with the strong long-range modeling capability of the Transformer to design a hybrid feature extractor, which dynamically and comprehensively captures features from source images under various lighting conditions. To make the mixed features more effective, we integrate an elaborate multi-scale attention enhancement module into the skip connections of the U-Net architecture; this module uses convolutional kernels of several sizes to expand the receptive field and incorporates an attention mechanism to enhance and highlight the features produced by the combined dynamic convolution and Transformer. With performance on high-level vision tasks in mind, an illumination detection network is integrated into the loss function, balancing the pixel fusion ratio between the source images and optimizing visual quality under varied lighting conditions. The fusion results indicate that our method consistently delivers superior background contrast and enhanced texture and structural features across diverse lighting conditions. Ablation and complementary experiments further confirm the effectiveness of the proposed method and highlight its potential for high-level vision tasks.
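To make the two load-bearing ideas concrete, below is a minimal PyTorch sketch of (1) dynamic convolution as a softmax-weighted mixture of candidate kernels and (2) an illumination-weighted intensity loss in which a day/night probability from an illumination detection network balances the pull toward the visible and infrared inputs. This is an illustrative sketch based on standard formulations of these components, not the authors' implementation: `DynamicConv2d`, the kernel count `num_kernels=4`, and `day_prob` are hypothetical names and choices, and the multi-scale attention module is omitted.

```python
# Sketch of two components the abstract describes, under assumed formulations:
# (1) dynamic convolution = a softmax-weighted mixture of K candidate kernels,
#     with mixing weights predicted per input sample;
# (2) an illumination-weighted L1 intensity loss that balances the pixel
#     contribution of the infrared and visible images.
# All names and hyperparameters here are illustrative, not from the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F


class DynamicConv2d(nn.Module):
    """Dynamic convolution: mix K candidate kernels per sample."""

    def __init__(self, in_ch, out_ch, kernel_size=3, num_kernels=4):
        super().__init__()
        self.in_ch, self.out_ch = in_ch, out_ch
        self.kernel_size = kernel_size
        # K candidate kernels stored in one parameter tensor.
        self.weight = nn.Parameter(
            torch.randn(num_kernels, out_ch, in_ch, kernel_size, kernel_size) * 0.02
        )
        # Tiny attention head: global pool -> linear -> softmax over K kernels.
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(in_ch, num_kernels),
        )

    def forward(self, x):
        b = x.size(0)
        alpha = F.softmax(self.attn(x), dim=1)               # (B, K) mixing weights
        # Aggregate a per-sample kernel: (B, out_ch, in_ch, k, k).
        w = torch.einsum("bk,koihw->boihw", alpha, self.weight)
        # Grouped-conv trick to apply a different kernel to each sample.
        x = x.reshape(1, b * self.in_ch, *x.shape[2:])
        w = w.reshape(b * self.out_ch, self.in_ch, self.kernel_size, self.kernel_size)
        y = F.conv2d(x, w, padding=self.kernel_size // 2, groups=b)
        return y.reshape(b, self.out_ch, *y.shape[2:])


def illumination_weighted_intensity_loss(fused, ir, vis, day_prob):
    """Balance the pixel-level fusion target by a scene illumination score.

    day_prob: (B,) probability that the scene is well lit, assumed to come
    from an illumination detection network (not implemented here). Under good
    lighting the fused image is pulled toward the visible image; in the dark
    it is pulled toward the infrared image.
    """
    day = day_prob.view(-1, 1, 1, 1)
    return (day * F.l1_loss(fused, vis, reduction="none")
            + (1.0 - day) * F.l1_loss(fused, ir, reduction="none")).mean()


if __name__ == "__main__":
    ir = torch.rand(2, 1, 64, 64)                 # toy infrared batch
    vis = torch.rand(2, 1, 64, 64)                # toy visible batch
    feats = DynamicConv2d(1, 16)(vis)             # per-sample adaptive features
    fused = torch.rand(2, 1, 64, 64)              # stand-in for a network output
    loss = illumination_weighted_intensity_loss(
        fused, ir, vis, torch.tensor([0.9, 0.1])  # one day scene, one night scene
    )
    print(feats.shape, loss.item())
```

In use, `day_prob` would come from the illumination detection network the abstract mentions; the loss then reduces to an L1 pull toward the visible image in daylight and toward the infrared image at night, which matches the abstract's description of balancing the pixel fusion ratio across lighting conditions.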
About the journal
Digital Signal Processing: A Review Journal is one of the oldest and most established journals in the field of signal processing, yet it aims to be among the most innovative. The journal invites top-quality research articles at the frontiers of research in all aspects of signal processing. Our objective is to provide a platform for the publication of ground-breaking research in signal processing with both academic and industrial appeal.
The journal has a special emphasis on statistical signal processing methodology, such as Bayesian signal processing, and encourages articles on emerging applications of signal processing, such as:
• big data
• machine learning
• internet of things
• information security
• systems biology and computational biology
• financial time series analysis
• autonomous vehicles
• quantum computing
• neuromorphic engineering
• human-computer interaction and intelligent user interfaces
• environmental signal processing
• geophysical signal processing, including seismic signal processing
• cheminformatics and bioinformatics
• audio, visual and performance arts
• disaster management and prevention
• renewable energy