Enhancing three-source cross-modality image fusion with improved DenseNet for infrared polarization and visible light images

Infrared Physics & Technology · IF 3.1 · JCR Q2 (Instruments & Instrumentation) · CAS Tier 3 (Physics & Astronomy) · Volume 141, Article 105493 · Pub Date: 2024-08-19 · DOI: 10.1016/j.infrared.2024.105493
Xuesong Wang, Bin Zhou, Jian Peng, Feng Huang, Xianyu Wu
Citations: 0

Abstract


The fusion of multi-modal images to create an image that preserves the unique features of each modality as well as the features shared across modalities is a challenging task, particularly in the context of infrared (IR)-visible image fusion. In addition, the presence of polarization and IR radiation information in images obtained from IR polarization sensors further complicates the multi-modal image-fusion process. This study proposes a fusion network designed to overcome the challenges associated with the integration of low-resolution IR, IR polarization, and high-resolution visible (VIS) images. By introducing cross-attention modules and a multi-stage fusion approach, the network can effectively extract and fuse features from different modalities, fully expressing the diversity of the images. The network learns an end-to-end mapping from source to fused images using a loss function, eliminating the need for ground-truth images for fusion. Experimental results on public datasets and remote-sensing field-test data demonstrate that the proposed methodology achieves commendable results in qualitative and quantitative evaluations, with gradient-based fusion performance Q^{AB/F}, mutual information (MI), and Q_{CB} values higher than the second-best values by 0.20, 0.94, and 0.04, respectively. This study provides a comprehensive representation of target scene information that results in enhanced image quality and improved object-identification capabilities. In addition, outdoor and VIS image datasets are produced, providing a data foundation and reference for future research in related fields.
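The abstract evaluates fusion quality with mutual information (MI), among other metrics. As a rough, self-contained sketch (not the authors' implementation; the function names and the choice of 256 histogram bins are assumptions of this illustration, and published MI metrics differ in log base and normalization), MI between a source image and the fused image can be estimated from their joint intensity histogram:

```python
import numpy as np

def mutual_information(img_a, img_b, bins=256):
    """Estimate mutual information (in bits) between two equally sized
    grayscale images from their joint intensity histogram."""
    hist_2d, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = hist_2d / hist_2d.sum()   # joint probability of intensity pairs
    px = pxy.sum(axis=1)            # marginal distribution of img_a
    py = pxy.sum(axis=0)            # marginal distribution of img_b
    nz = pxy > 0                    # sum only non-zero entries to avoid log(0)
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px[:, None] * py[None, :])[nz])))

def fusion_mi(src_ir, src_vis, fused, bins=256):
    """MI fusion metric: total information the fused image shares with each source."""
    return mutual_information(src_ir, fused, bins) + mutual_information(src_vis, fused, bins)

# Tiny demo on synthetic data: a fused image identical to a source shares
# maximal information with it (MI equals the entropy of that source).
rng = np.random.default_rng(0)
ir = rng.integers(0, 256, (64, 64))
vis = rng.integers(0, 256, (64, 64))
print(fusion_mi(ir, vis, fused=ir))
```

A higher value indicates the fused image retains more of the sources' intensity statistics; like the paper's comparison, it is only meaningful relative to other fusion results on the same inputs.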

Source journal: Infrared Physics & Technology
CiteScore: 5.70 · Self-citation rate: 12.10% · Annual publications: 400 · Review time: 67 days
Journal description: The Journal covers the entire field of infrared physics and technology: theory, experiment, application, devices and instrumentation. "Infrared" is defined as covering the near, mid and far infrared (terahertz) regions from 0.75 µm (750 nm) to 1 mm (300 GHz). Submissions in the 300 GHz to 100 GHz region may be accepted at the editor's discretion if their content is relevant to shorter wavelengths. Submissions must be primarily concerned with and directly relevant to this spectral region. Its core topics can be summarized as the generation, propagation and detection of infrared radiation; the associated optics, materials and devices; and its use in all fields of science, industry, engineering and medicine. Infrared techniques occur in many different fields, notably spectroscopy and interferometry; material characterization and processing; and atmospheric physics, astronomy and space research. Scientific aspects include lasers, quantum optics, quantum electronics, image processing and semiconductor physics. Some important applications are medical diagnostics and treatment, industrial inspection and environmental monitoring.