Wangjie Li, Xiaoyi Lv, Yaoyong Zhou, Yunling Wang, Min Li
Infrared Physics & Technology, Volume 142, Article 105541. Published 2024-08-30. DOI: 10.1016/j.infrared.2024.105541
SeACPFusion: An Adaptive Fusion Network for Infrared and Visible Images Based on Brightness Perception
Fusing visible and infrared images aims to generate a single image that highlights important targets while preserving textural detail. Most deep learning-based fusion algorithms in current use produce decent results, but their modeling still fails to account for the varying amounts of information carried by different scenes or regions. We therefore propose SeACPFusion, a luminance-aware adaptive fusion network for infrared and visible images that adaptively preserves the intensity information of the salient targets in the source images together with the texture information of the background, in an optimal ratio. Specifically, we design a pixel-level luminance loss (PBL) that guides the fusion model's training in real time and retains the optimal intensity information according to the pixel luminance ratio of the different source images. In addition, we design a Channel Transformer (CTF) that models the relationships between different attributes from the perspective of the feature channels and focuses on key information through a self-attention mechanism, achieving adaptive fusion. Extensive tests on the MSRS, RoadScene, and TNO datasets show that SeACPFusion surpasses nine representative deep learning methods on six objective metrics and achieves the best visual results in scenes with overexposure or underexposure. Moreover, its relatively efficient operation and small parameter count make the algorithm a promising preprocessing module for downstream, more complicated vision tasks.
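The abstract does not give the exact form of the pixel-level luminance loss (PBL). As an illustration only, a minimal sketch of the general idea it describes, assuming a per-pixel luminance-ratio weighting between the two source images (the function name, shapes, and L1 distance are hypothetical choices, not the paper's definition):

```python
import numpy as np

def pixel_luminance_loss(fused, ir, vis, eps=1e-6):
    """Hypothetical sketch of a pixel-level luminance-weighted fusion loss.

    Each pixel's intensity target is a blend of the infrared and visible
    inputs, weighted by their relative luminance, so the brighter (more
    salient) source pixel contributes more at that location.
    """
    denom = ir + vis + eps                 # avoid division by zero
    w_ir = ir / denom                      # per-pixel luminance ratio (IR)
    w_vis = vis / denom                    # per-pixel luminance ratio (visible)
    target = w_ir * ir + w_vis * vis       # luminance-weighted intensity target
    return float(np.mean(np.abs(fused - target)))  # L1 distance to the target
```

Under this weighting, a fused image that matches the luminance-weighted target incurs zero loss, while one that simply copies the dimmer source is penalized, which is one way to realize the "optimal intensity ratio" behavior the abstract describes.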
Journal description:
The Journal covers the entire field of infrared physics and technology: theory, experiment, application, devices, and instrumentation. "Infrared" is defined as covering the near, mid, and far infrared (terahertz) regions, from 0.75 µm (750 nm) to 1 mm (300 GHz). Submissions in the 300 GHz to 100 GHz region may be accepted at the editors' discretion if their content is relevant to shorter wavelengths. Submissions must be primarily concerned with and directly relevant to this spectral region.
Its core topics can be summarized as the generation, propagation, and detection of infrared radiation; the associated optics, materials, and devices; and its use in all fields of science, industry, engineering, and medicine.
Infrared techniques occur in many different fields, notably spectroscopy and interferometry; material characterization and processing; atmospheric physics, astronomy and space research. Scientific aspects include lasers, quantum optics, quantum electronics, image processing and semiconductor physics. Some important applications are medical diagnostics and treatment, industrial inspection and environmental monitoring.