AFIRE: Adaptive FusionNet for illumination-robust feature extraction in heterogeneous imaging environments

Infrared Physics & Technology · IF 3.1 · CAS Tier 3 (Physics & Astrophysics) · JCR Q2 (Instruments & Instrumentation) · Pub Date: 2024-09-16 · DOI: 10.1016/j.infrared.2024.105557
Mingxin Yu, Xufan Miao, Yichen Sun, Yuchen Bai, Lianqing Zhu
{"title":"AFIRE:用于在异构成像环境中提取适应光照的特征的自适应 FusionNet","authors":"Mingxin Yu ,&nbsp;Xufan Miao ,&nbsp;Yichen Sun ,&nbsp;Yuchen Bai ,&nbsp;Lianqing Zhu","doi":"10.1016/j.infrared.2024.105557","DOIUrl":null,"url":null,"abstract":"<div><p>The fusion of infrared and visible images aims to synthesize a fused image that incorporates richer information by leveraging the distinct characteristics of each modality. However, the disparate quality of input images in terms of infrared and visible light significantly impacts fusion performance. To address this issue, we propose a novel deep adaptive fusion method called Adaptive FusionNet for Illumination-Robust Feature Extraction (AFIRE). This method involves the interactive processing of two input features and dynamically adjusts the fusion weights based on varying illumination conditions. Specifically, we introduce a novel interactive extraction structure during the feature extraction stage for both infrared and visible light, enabling the capture of more complementary information. Additionally, we design a Deep Adaptive Fusion module to assess the quality of input features and perform weighted fusion through a channel attention mechanism. Finally, a new loss function is formulated by incorporating the entropy and median of input images to guide the training of the fusion network. Extensive experiments demonstrate that AFIRE outperforms state-of-the-art methods in preserving pixel intensity distribution and texture details. Source code is available at: <span><span>https://www.github.com/ISCLab-Bistu/AFIRE</span><svg><path></path></svg></span>.</p></div>","PeriodicalId":13549,"journal":{"name":"Infrared Physics & Technology","volume":null,"pages":null},"PeriodicalIF":3.1000,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"AFIRE: Adaptive FusionNet for illumination-robust feature extraction in heterogeneous imaging environments\",\"authors\":\"Mingxin Yu ,&nbsp;Xufan Miao ,&nbsp;Yichen Sun ,&nbsp;Yuchen Bai ,&nbsp;Lianqing Zhu\",\"doi\":\"10.1016/j.infrared.2024.105557\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>The fusion of infrared and visible images aims to synthesize a fused image that incorporates richer information by leveraging the distinct characteristics of each modality. However, the disparate quality of input images in terms of infrared and visible light significantly impacts fusion performance. To address this issue, we propose a novel deep adaptive fusion method called Adaptive FusionNet for Illumination-Robust Feature Extraction (AFIRE). This method involves the interactive processing of two input features and dynamically adjusts the fusion weights based on varying illumination conditions. Specifically, we introduce a novel interactive extraction structure during the feature extraction stage for both infrared and visible light, enabling the capture of more complementary information. Additionally, we design a Deep Adaptive Fusion module to assess the quality of input features and perform weighted fusion through a channel attention mechanism. Finally, a new loss function is formulated by incorporating the entropy and median of input images to guide the training of the fusion network. Extensive experiments demonstrate that AFIRE outperforms state-of-the-art methods in preserving pixel intensity distribution and texture details. 
Source code is available at: <span><span>https://www.github.com/ISCLab-Bistu/AFIRE</span><svg><path></path></svg></span>.</p></div>\",\"PeriodicalId\":13549,\"journal\":{\"name\":\"Infrared Physics & Technology\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":3.1000,\"publicationDate\":\"2024-09-16\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Infrared Physics & Technology\",\"FirstCategoryId\":\"101\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S1350449524004419\",\"RegionNum\":3,\"RegionCategory\":\"物理与天体物理\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"INSTRUMENTS & INSTRUMENTATION\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Infrared Physics & Technology","FirstCategoryId":"101","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1350449524004419","RegionNum":3,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"INSTRUMENTS & INSTRUMENTATION","Score":null,"Total":0}
Cited by: 0

Abstract


The fusion of infrared and visible images aims to synthesize a fused image that incorporates richer information by leveraging the distinct characteristics of each modality. However, the disparate quality of input images in terms of infrared and visible light significantly impacts fusion performance. To address this issue, we propose a novel deep adaptive fusion method called Adaptive FusionNet for Illumination-Robust Feature Extraction (AFIRE). This method involves the interactive processing of two input features and dynamically adjusts the fusion weights based on varying illumination conditions. Specifically, we introduce a novel interactive extraction structure during the feature extraction stage for both infrared and visible light, enabling the capture of more complementary information. Additionally, we design a Deep Adaptive Fusion module to assess the quality of input features and perform weighted fusion through a channel attention mechanism. Finally, a new loss function is formulated by incorporating the entropy and median of input images to guide the training of the fusion network. Extensive experiments demonstrate that AFIRE outperforms state-of-the-art methods in preserving pixel intensity distribution and texture details. Source code is available at: https://www.github.com/ISCLab-Bistu/AFIRE.
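The abstract names two concrete mechanisms: a Deep Adaptive Fusion module that weighs each modality's features by estimated quality through channel attention, and a loss guided by the entropy and median of the input images. Below is a minimal PyTorch-style sketch of how such pieces might fit together; every class name, shape, and design choice here is an assumption made for illustration, not the authors' implementation, which is available in the linked repository.

```python
import torch
import torch.nn as nn

class DeepAdaptiveFusionSketch(nn.Module):
    """Hypothetical channel-attention fusion (not the paper's exact module):
    estimates one scalar weight per modality from the concatenated features,
    then blends the infrared and visible feature maps accordingly."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                     # squeeze: (B, 2C, 1, 1)
            nn.Conv2d(2 * channels, 2 * channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(2 * channels // reduction, 2, 1),  # one logit per modality
            nn.Softmax(dim=1),                           # two weights summing to 1
        )

    def forward(self, feat_ir: torch.Tensor, feat_vis: torch.Tensor) -> torch.Tensor:
        w = self.attn(torch.cat([feat_ir, feat_vis], dim=1))  # (B, 2, 1, 1)
        return w[:, 0:1] * feat_ir + w[:, 1:2] * feat_vis     # broadcast blend


def quality_score(img: torch.Tensor, bins: int = 256) -> torch.Tensor:
    """Guessed per-image quality score combining Shannon entropy and median,
    the two statistics the abstract says guide the loss. Expects img in [0, 1]."""
    hist = torch.histc(img, bins=bins, min=0.0, max=1.0)
    p = hist / hist.sum().clamp(min=1.0)
    entropy = -(p * (p + 1e-12).log()).sum()   # information content of the image
    return entropy * img.median()              # brighter, richer image -> higher score
```

In a training loop, such scores could set the relative weight of the intensity-fidelity terms for each modality, so that a dark, low-entropy visible frame contributes less to the fusion target than its infrared counterpart.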

Source journal: Infrared Physics & Technology · CiteScore: 5.70 · Self-citation rate: 12.10% · Articles per year: 400 · Average review time: 67 days
Journal scope: The Journal covers the entire field of infrared physics and technology: theory, experiment, application, devices and instrumentation. "Infrared" is defined as covering the near, mid and far infrared (terahertz) regions from 0.75 µm (750 nm) to 1 mm (300 GHz). Submissions in the 300 GHz to 100 GHz region may be accepted at the editors' discretion if their content is relevant to shorter wavelengths. Submissions must be primarily concerned with and directly relevant to this spectral region. Its core topics can be summarized as the generation, propagation and detection of infrared radiation; the associated optics, materials and devices; and its use in all fields of science, industry, engineering and medicine. Infrared techniques occur in many different fields, notably spectroscopy and interferometry; material characterization and processing; and atmospheric physics, astronomy and space research. Scientific aspects include lasers, quantum optics, quantum electronics, image processing and semiconductor physics. Some important applications are medical diagnostics and treatment, industrial inspection and environmental monitoring.
Latest articles in this journal:
- Intermediate state between steady and breathing solitons in fiber lasers
- Improving the thermochromic performance of VO2 films by embedding Cu-Al nanoparticles as heterogeneous nucleation cores in the VO2/VO2 bilayer structure
- Dielectric-elastomer-driven long-wave infrared Alvarez lenses for continuous zooming imaging
- An improved infrared polarization model considering the volume scattering effect for coating materials
- Gate-tunable in-sensor computing vdW heterostructures for infrared photodetection