Infrared and visible image fusion via dual encoder based on dense connection

Pattern Recognition · IF 7.5 · CAS Tier 1 (Computer Science) · JCR Q1 (Computer Science, Artificial Intelligence) · Pub Date: 2025-02-17 · DOI: 10.1016/j.patcog.2025.111476
Quan Lu, Hongbin Zhang, Linfei Yin
{"title":"Infrared and visible image fusion via dual encoder based on dense connection","authors":"Quan Lu,&nbsp;Hongbin Zhang,&nbsp;Linfei Yin","doi":"10.1016/j.patcog.2025.111476","DOIUrl":null,"url":null,"abstract":"<div><div>Aiming at the problems of information loss and edge blurring due to the loss of gradient features that tend to occur during the fusion of infrared and visible images, this study proposes a dual encoder image fusion method (DEFusion) based on dense connectivity. The proposed method processes infrared and visible images by different means, therefore guaranteeing the best possible preservation of the features of the original image. A new progressive fusion strategy is constructed to ensure that the network is better able to capture the detailed information present in visible images while minimizing the gradient loss of the infrared image. Furthermore, a novel loss function that includes gradient loss and content loss, which ensures that the fusion results consider both the detailed information and gradient of the source image, is proposed in this study to facilitate the fusion process. The experimental results with the state-of-art methods on TNO and RoadScene datasets verify that the proposed method exhibits superior performance in most indices. The fused image exhibits excellent subjective contrast and clarity, providing a strong visual perception. The results of the comparison experiment demonstrate that this method exhibits favorable characteristics in terms of generalization and robustness.</div></div>","PeriodicalId":49713,"journal":{"name":"Pattern Recognition","volume":"163 ","pages":"Article 111476"},"PeriodicalIF":7.5000,"publicationDate":"2025-02-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Pattern Recognition","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0031320325001360","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

To address the information loss and edge blurring caused by the loss of gradient features during the fusion of infrared and visible images, this study proposes a dual-encoder image fusion method (DEFusion) based on dense connections. The proposed method processes infrared and visible images through separate branches, thereby preserving the features of the source images as faithfully as possible. A new progressive fusion strategy is constructed so that the network better captures the detailed information present in visible images while minimizing the gradient loss of the infrared image. Furthermore, a novel loss function combining gradient loss and content loss is proposed to ensure that the fusion result accounts for both the detailed information and the gradients of the source images. Experimental comparisons with state-of-the-art methods on the TNO and RoadScene datasets verify that the proposed method achieves superior performance on most metrics. The fused images exhibit excellent subjective contrast and clarity, giving strong visual quality. The comparison experiments also show that the method generalizes well and is robust.
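The abstract pairs a content term with a gradient term in the training objective. As an illustration only, the minimal PyTorch sketch below shows one common way such a combined loss is written for infrared/visible fusion; the Sobel-based gradient term, the element-wise maximum targets, and the weight `lambda_grad` are assumptions made for this example, not the exact formulation used in the paper.

```python
# Minimal sketch of a content + gradient fusion loss (assumed formulation, not DEFusion's).
import torch
import torch.nn.functional as F


def sobel_gradient(img: torch.Tensor) -> torch.Tensor:
    """Approximate per-pixel gradient magnitude with Sobel kernels.

    Expects a single-channel batch of shape (N, 1, H, W).
    """
    kx = torch.tensor([[-1., 0., 1.],
                       [-2., 0., 2.],
                       [-1., 0., 1.]], device=img.device).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)  # Sobel y kernel is the transpose of the x kernel
    gx = F.conv2d(img, kx, padding=1)
    gy = F.conv2d(img, ky, padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-6)


def fusion_loss(fused: torch.Tensor, ir: torch.Tensor, vis: torch.Tensor,
                lambda_grad: float = 10.0) -> torch.Tensor:
    """Content loss pulls the fused image toward the brighter source pixel;
    gradient loss pulls its edges toward the stronger source gradient."""
    content = F.l1_loss(fused, torch.max(ir, vis))
    grad = F.l1_loss(sobel_gradient(fused),
                     torch.max(sobel_gradient(ir), sobel_gradient(vis)))
    return content + lambda_grad * grad


# Example usage with random single-channel 256x256 batches:
# ir, vis, fused = (torch.rand(4, 1, 256, 256) for _ in range(3))
# loss = fusion_loss(fused, ir, vis)
```

The element-wise maximum is only one plausible choice of target; weighting schemes based on saliency or per-modality masks are equally common in this line of work.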
Source journal: Pattern Recognition (Engineering & Technology: Electrical & Electronic Engineering)
CiteScore: 14.40
Self-citation rate: 16.20%
Articles per year: 683
Review time: 5.6 months
About the journal: The field of Pattern Recognition is both mature and rapidly evolving, playing a crucial role in various related fields such as computer vision, image processing, text analysis, and neural networks. It closely intersects with machine learning and is being applied in emerging areas like biometrics, bioinformatics, multimedia data analysis, and data science. The journal Pattern Recognition, established half a century ago during the early days of computer science, has since grown significantly in scope and influence.
Latest articles in this journal:
Editorial Board
A robust transductive distribution calibration method for few-shot learning
Robust shortcut and disordered robustness: Improving adversarial training through adaptive smoothing
Embedded multi-label feature selection via orthogonal regression
Texture and noise dual adaptation for infrared image super-resolution