MSL-CCRN: Multi-stage self-supervised learning based cross-modality contrastive representation network for infrared and visible image fusion

Digital Signal Processing · IF 2.9 · JCR Q2 (Engineering, Electrical & Electronic) · CAS region 3 (Engineering & Technology) · Pub Date: 2024-11-06 · DOI: 10.1016/j.dsp.2024.104853
Zhilin Yan, Rencan Nie, Jinde Cao, Guangxu Xie, Zhengze Ding
Abstract

Infrared and visible image fusion (IVIF) must reconcile the different information captured by the two modalities, so the central research problem is how to extract that complementary information effectively. In this work, we propose a multi-stage self-supervised learning based cross-modality contrastive representation network for infrared and visible image fusion (MSL-CCRN). First, because scene differences between modalities hinder cross-modal fusion, we propose a contrastive representation network (CRN). CRN strengthens the interaction between the fused image and the source images, and markedly improves the similarity between the meaningful features of each modality and the fused image. Second, because IVIF lacks ground truth, the quality of a directly obtained fused image suffers; we design a multi-stage fusion strategy to counter the loss of important information in this process. Notably, our method is fully self-supervised. In fusion stage I, we reconstruct an initial fused image that serves as a new view for fusion stage II. In fusion stage II, we use the fused image obtained in the previous stage to carry out a three-view contrastive representation, thereby constraining the feature extraction of the source images so that the final fused image retains more of their important information. Extensive qualitative and quantitative experiments, together with downstream object detection experiments, show that our proposed method performs excellently compared with state-of-the-art methods.

Published in Digital Signal Processing, Volume 156, Article 104853.
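The three-view contrastive representation described in the abstract can be illustrated with an InfoNCE-style objective: the fused image's embedding is pulled toward the infrared view, the visible view, and the stage-I fused image, and pushed away from unrelated patches. This is a minimal numpy sketch under assumed names and an assumed InfoNCE form, not the authors' exact loss:

```python
# Hypothetical sketch of a three-view contrastive objective: the fused
# embedding should be similar to the infrared, visible, and stage-I fused
# views, and dissimilar to unrelated (negative) patch embeddings.
# The InfoNCE form and all names here are illustrative assumptions.
import numpy as np

def l2_normalize(x, eps=1e-8):
    # Normalize along the last axis so dot products become cosine similarities.
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + eps)

def info_nce(anchor, positive, negatives, temperature=0.1):
    # Standard InfoNCE for one anchor: -log( exp(sim+) / (exp(sim+) + sum exp(sim-)) ).
    a = l2_normalize(anchor)
    pos = np.exp(np.dot(a, l2_normalize(positive)) / temperature)
    neg = np.sum(np.exp(np.dot(l2_normalize(negatives), a) / temperature))
    return -np.log(pos / (pos + neg))

def three_view_contrastive_loss(f_fused, f_ir, f_vis, f_stage1,
                                negatives, temperature=0.1):
    # Average the InfoNCE terms over the three positive views, as in the
    # stage-II "three-view contrastive representation" described above.
    views = [f_ir, f_vis, f_stage1]
    return float(np.mean([info_nce(f_fused, v, negatives, temperature)
                          for v in views]))
```

When the three view embeddings closely match the fused embedding, the loss is small; embeddings that disagree with the fused image (or resemble the negatives) drive it up, which is the constraint on source-image feature extraction the paper describes.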
Citations: 0

Source journal: Digital Signal Processing (Engineering: Electrical & Electronic)
CiteScore: 5.30
Self-citation rate: 17.20%
Articles per year: 435
Review time: 66 days
Journal description: Digital Signal Processing: A Review Journal is one of the oldest and most established journals in the field of signal processing, yet it aims to be the most innovative. The Journal invites top quality research articles at the frontiers of research in all aspects of signal processing. Our objective is to provide a platform for the publication of ground-breaking research in signal processing with both academic and industrial appeal. The journal has a special emphasis on statistical signal processing methodology, such as Bayesian signal processing, and encourages articles on emerging applications of signal processing such as:
• big data
• machine learning
• internet of things
• information security
• systems biology and computational biology
• financial time series analysis
• autonomous vehicles
• quantum computing
• neuromorphic engineering
• human-computer interaction and intelligent user interfaces
• environmental signal processing
• geophysical signal processing including seismic signal processing
• chemioinformatics and bioinformatics
• audio, visual and performance arts
• disaster management and prevention
• renewable energy