{"title":"Autoencoder-based image fusion network with enhanced channels and feature saliency","authors":"Hongmei Wang , Xuanyu Lu , Ze Li","doi":"10.1016/j.ijleo.2024.172104","DOIUrl":null,"url":null,"abstract":"<div><div>The existing deep learning based infrared and visible image fusion technologies have made significant progress, but there are still many problems need to be solved, such as information loss (targets and texture, etc.) of both infrared and visible images, noise and artifacts existing in fused image. To address these issues in fusion, an infrared and visible image fusion method based on autoencoder network is proposed in this paper. Firstly, novel enhanced channels are designed and input parallelly with source images into the network to enhance the specific features and reduce information loss in feature fusion. Then, the feature maps are obtained by the encoder. Next, a feature fusion method based on feature saliency is proposed, using a pre-trained classifier to measure the saliency of features, and the fused image is obtained by the decoder finally. Experimental results demonstrate that the targets are obvious and the textures are plentiful in the fused images generated by the proposed method. Also, the objective metrics of the proposed method are higher than the state of the art methods, which demonstrate that the proposed method is effective to fuse the infrared and visible images.</div></div>","PeriodicalId":19513,"journal":{"name":"Optik","volume":"319 ","pages":"Article 172104"},"PeriodicalIF":3.1000,"publicationDate":"2024-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Optik","FirstCategoryId":"101","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0030402624005035","RegionNum":3,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"Engineering","Score":null,"Total":0}
Citations: 0
Abstract
Existing deep-learning-based infrared and visible image fusion methods have made significant progress, but several problems remain, such as the loss of information (targets, texture, etc.) from both the infrared and visible images, and noise and artifacts in the fused image. To address these issues, this paper proposes an infrared and visible image fusion method based on an autoencoder network. First, novel enhanced channels are designed and fed into the network in parallel with the source images to strengthen specific features and reduce information loss during feature fusion. The encoder then produces the feature maps. Next, a feature fusion method based on feature saliency is proposed, in which a pre-trained classifier measures the saliency of the features; finally, the decoder reconstructs the fused image. Experimental results show that the fused images generated by the proposed method contain distinct targets and rich textures. Moreover, the objective metrics of the proposed method exceed those of state-of-the-art methods, demonstrating that it is effective for fusing infrared and visible images.
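The pipeline outlined in the abstract (enhanced channels stacked with the source images, a shared encoder, saliency-weighted feature fusion, and a decoder) can be sketched in a few lines of PyTorch. The sketch below is illustrative only: the layer sizes, the `enhanced_channel` transform, and the `saliency_weights` function are assumptions standing in for the paper's enhanced-channel design and its classifier-based saliency measure, which the abstract does not specify.

```python
# Minimal sketch of an autoencoder-style IR/visible fusion pipeline.
# The architecture details below are assumptions, not the network from the paper.
import torch
import torch.nn as nn


class Encoder(nn.Module):
    def __init__(self, in_ch=2, feat_ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.net(x)


class Decoder(nn.Module):
    def __init__(self, feat_ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(feat_ch, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, f):
        return torch.sigmoid(self.net(f))


def enhanced_channel(img):
    """Placeholder enhanced channel: a min-max contrast-stretched copy of the source.

    The paper designs dedicated enhanced channels; the exact transform is not
    reproduced here, so simple contrast stretching stands in for it.
    """
    mn = img.amin(dim=(-2, -1), keepdim=True)
    mx = img.amax(dim=(-2, -1), keepdim=True)
    return (img - mn) / (mx - mn + 1e-8)


def saliency_weights(feat_ir, feat_vis):
    """Per-pixel fusion weights from feature activity.

    The paper scores saliency with a pre-trained classifier; here the mean
    absolute activation of each feature map serves as a stand-in measure.
    """
    s_ir = feat_ir.abs().mean(dim=1, keepdim=True)
    s_vis = feat_vis.abs().mean(dim=1, keepdim=True)
    w = torch.softmax(torch.cat([s_ir, s_vis], dim=1), dim=1)
    return w[:, :1], w[:, 1:]


def fuse(ir, vis, encoder, decoder):
    # Each source image is stacked with its enhanced channel before encoding.
    feat_ir = encoder(torch.cat([ir, enhanced_channel(ir)], dim=1))
    feat_vis = encoder(torch.cat([vis, enhanced_channel(vis)], dim=1))
    w_ir, w_vis = saliency_weights(feat_ir, feat_vis)
    return decoder(w_ir * feat_ir + w_vis * feat_vis)


if __name__ == "__main__":
    enc, dec = Encoder(), Decoder()
    ir = torch.rand(1, 1, 128, 128)   # infrared image
    vis = torch.rand(1, 1, 128, 128)  # visible image
    fused = fuse(ir, vis, enc, dec)
    print(fused.shape)  # torch.Size([1, 1, 128, 128])
```

In this sketch the two modality-specific feature maps are blended with softmax-normalized per-pixel weights before decoding; the published method replaces that stand-in activity measure with a classifier-derived saliency score.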
Journal description:
Optik publishes articles on all subjects related to light and electron optics and offers a survey on the state of research and technical development within the following fields:
Optics:
- Optics design, geometrical and beam optics, wave optics
- Optical and micro-optical components, diffractive optics, devices and systems
- Photoelectric and optoelectronic devices
- Optical properties of materials, nonlinear optics, wave propagation and transmission in homogeneous and inhomogeneous materials
- Information optics, image formation and processing, holographic techniques, microscopes and spectrometer techniques, and image analysis
- Optical testing and measuring techniques
- Optical communication and computing
- Physiological optics
As well as other related topics.