ALFusion: Adaptive fusion for infrared and visible images under complex lighting conditions

IF 2.9 · CAS Tier 3 (Engineering & Technology) · Q2 Engineering, Electrical & Electronic · Digital Signal Processing · Pub Date: 2024-11-17 · DOI: 10.1016/j.dsp.2024.104864
Hanlin Xu , Gang Liu , Yao Qian , Xiangbo Zhang , Durga Prasad Bavirisetti
Digital Signal Processing, Volume 156, Article 104864. Citations: 0.

Abstract

In infrared and visible image fusion, source images often exhibit complex and variable characteristics caused by scene illumination. To address the challenges posed by complex lighting conditions and improve the quality of fused images, we develop ALFusion, a method that combines dynamic convolution and the Transformer for infrared and visible image fusion. The core idea is to pair the adaptability of dynamic convolution with the Transformer's strong long-range modeling capability in a hybrid feature extractor that dynamically and comprehensively captures features from source images under various lighting conditions. To make the mixed features more effective, we integrate a carefully designed multi-scale attention enhancement module into the skip connections of the U-Net architecture; this module uses convolutional kernels of several sizes to enlarge the receptive field and applies an attention mechanism to reinforce the features extracted jointly by dynamic convolution and the Transformer. With performance in advanced vision tasks in mind, an illumination detection network is integrated into the loss function, adaptively balancing the pixel fusion ratio so that information from the appropriate source image dominates under each lighting condition. Fusion results show that our method consistently delivers superior background contrast and enhanced texture and structural features across diverse lighting conditions. Ablation and complementary experiments further confirm the effectiveness of the proposed method and highlight its potential in advanced vision tasks.
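The abstract describes a loss function in which an illumination detection network balances how strongly the fused image is pulled toward each source. The paper's actual formulation is not given here, so the following is only a minimal sketch of that idea: a scalar "day-probability" from a hypothetical illumination network weights two L1 reconstruction terms. All names (`illumination_weighted_loss`, `p_day`) are illustrative assumptions, not the authors' API.

```python
import numpy as np

def illumination_weighted_loss(fused, ir, vis, p_day):
    """Sketch of an illumination-aware fusion loss (hypothetical names).

    p_day is a scalar in [0, 1] that an illumination detection network
    would output: high for well-lit scenes, low for dark scenes. It sets
    the pixel fusion ratio between the visible and infrared terms.
    """
    w_vis = float(p_day)       # bright scene: trust visible texture more
    w_ir = 1.0 - w_vis         # dark scene: trust infrared intensity more
    l_vis = np.mean(np.abs(fused - vis))   # L1 distance to visible image
    l_ir = np.mean(np.abs(fused - ir))     # L1 distance to infrared image
    return w_vis * l_vis + w_ir * l_ir
```

For example, with `p_day = 0.8` and a fused image identical to the visible source, the loss reduces to 0.2 times the mean absolute infrared residual, so a bright-scene prediction is penalized mostly for deviating from the visible image.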
Source journal
Digital Signal Processing (Engineering: Electrical & Electronic)
CiteScore: 5.30
Self-citation rate: 17.20%
Articles per year: 435
Review time: 66 days
Journal description: Digital Signal Processing: A Review Journal is one of the oldest and most established journals in the field of signal processing, yet it aims to be the most innovative. The Journal invites top-quality research articles at the frontiers of research in all aspects of signal processing. Our objective is to provide a platform for the publication of ground-breaking research in signal processing with both academic and industrial appeal. The journal has a special emphasis on statistical signal processing methodology, such as Bayesian signal processing, and encourages articles on emerging applications of signal processing such as:
• big data
• machine learning
• internet of things
• information security
• systems biology and computational biology
• financial time series analysis
• autonomous vehicles
• quantum computing
• neuromorphic engineering
• human-computer interaction and intelligent user interfaces
• environmental signal processing
• geophysical signal processing, including seismic signal processing
• cheminformatics and bioinformatics
• audio, visual and performance arts
• disaster management and prevention
• renewable energy
Latest articles in this journal
• Editorial Board
• Research on ZYNQ neural network acceleration method for aluminum surface microdefects
• Cross-scale informative priors network for medical image segmentation
• An improved digital predistortion scheme for nonlinear transmitters with limited bandwidth