Yuning Xie, Gang Liu, Rui-long Xu, D. P. Bavirisetti, Haojie Tang, Mengliang Xing
{"title":"R2F-UGCGAN:一种基于区域融合因子的联合梯度和对比度生成对抗网络,用于红外和可见光图像融合","authors":"Yuning Xie, Gang Liu, Rui-long Xu, D. P. Bavirisetti, Haojie Tang, Mengliang Xing","doi":"10.1080/09500340.2023.2174358","DOIUrl":null,"url":null,"abstract":"ABSTRACT To efficiently preserve texture and target information in source images, an image fusion algorithm of Regional Fusion Factor-Based Union Gradient and Contrast Generative Adversarial Network (R2F-UGCGAN) is proposed. Firstly, an adaptive gradient diffusion (AGD) decomposition algorithm is designed to extract representative features. A pair of infrared (IR) and visible (VIS) images are decomposed by AGD to obtain low-frequency components with salient targets and high-frequency components with rich edge gradient information. Secondly, In the high-frequency components, principal component analysis (PCA) is used for fusion to obtain more detailed images with texture gradients. R2F-UGCGAN is used to fuse the low-frequency components, which can effectively ensure good consistency between the target region and the background region. Therefore, a fused image is produced, which inherits more thermal radiation information and important texture details. Finally, subjective and objective comparison experiments are performed on TNO and RoadScene datasets with state-of-the-art image fusion methods. The experimental results of R2F-UGCGAN are prominent and consistent compared to these fusion algorithms in terms of both subjective and objective evaluation.","PeriodicalId":16426,"journal":{"name":"Journal of Modern Optics","volume":"70 1","pages":"52 - 68"},"PeriodicalIF":1.2000,"publicationDate":"2023-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"R2F-UGCGAN: a regional fusion factor-based union gradient and contrast generative adversarial network for infrared and visible image fusion\",\"authors\":\"Yuning Xie, Gang Liu, Rui-long Xu, D. P. 
Bavirisetti, Haojie Tang, Mengliang Xing\",\"doi\":\"10.1080/09500340.2023.2174358\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"ABSTRACT To efficiently preserve texture and target information in source images, an image fusion algorithm of Regional Fusion Factor-Based Union Gradient and Contrast Generative Adversarial Network (R2F-UGCGAN) is proposed. Firstly, an adaptive gradient diffusion (AGD) decomposition algorithm is designed to extract representative features. A pair of infrared (IR) and visible (VIS) images are decomposed by AGD to obtain low-frequency components with salient targets and high-frequency components with rich edge gradient information. Secondly, In the high-frequency components, principal component analysis (PCA) is used for fusion to obtain more detailed images with texture gradients. R2F-UGCGAN is used to fuse the low-frequency components, which can effectively ensure good consistency between the target region and the background region. Therefore, a fused image is produced, which inherits more thermal radiation information and important texture details. Finally, subjective and objective comparison experiments are performed on TNO and RoadScene datasets with state-of-the-art image fusion methods. 
The experimental results of R2F-UGCGAN are prominent and consistent compared to these fusion algorithms in terms of both subjective and objective evaluation.\",\"PeriodicalId\":16426,\"journal\":{\"name\":\"Journal of Modern Optics\",\"volume\":\"70 1\",\"pages\":\"52 - 68\"},\"PeriodicalIF\":1.2000,\"publicationDate\":\"2023-01-02\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Modern Optics\",\"FirstCategoryId\":\"101\",\"ListUrlMain\":\"https://doi.org/10.1080/09500340.2023.2174358\",\"RegionNum\":4,\"RegionCategory\":\"物理与天体物理\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q4\",\"JCRName\":\"OPTICS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Modern Optics","FirstCategoryId":"101","ListUrlMain":"https://doi.org/10.1080/09500340.2023.2174358","RegionNum":4,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"OPTICS","Score":null,"Total":0}
R2F-UGCGAN: a regional fusion factor-based union gradient and contrast generative adversarial network for infrared and visible image fusion
ABSTRACT To efficiently preserve texture and target information from source images, an image fusion algorithm based on a Regional Fusion Factor-Based Union Gradient and Contrast Generative Adversarial Network (R2F-UGCGAN) is proposed. First, an adaptive gradient diffusion (AGD) decomposition algorithm is designed to extract representative features: a pair of infrared (IR) and visible (VIS) images is decomposed by AGD into low-frequency components containing salient targets and high-frequency components rich in edge gradient information. Second, the high-frequency components are fused by principal component analysis (PCA) to retain texture gradients and fine detail, while the low-frequency components are fused by R2F-UGCGAN, which effectively ensures consistency between the target region and the background region. The resulting fused image therefore inherits more thermal radiation information and important texture details. Finally, subjective and objective comparison experiments against state-of-the-art image fusion methods are performed on the TNO and RoadScene datasets. In both subjective and objective evaluation, R2F-UGCGAN produces prominent and consistent results compared with these fusion algorithms.
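The abstract states that the high-frequency components are fused by PCA but gives no implementation details. A common PCA fusion rule, shown below as a minimal sketch and not necessarily the paper's exact formulation, weights each source image by the leading eigenvector of the 2×2 covariance of the two flattened components, so the source with more variance (typically more edge detail) contributes more:

```python
import numpy as np

def pca_fuse(hf_ir, hf_vis):
    """Fuse two high-frequency component images with PCA-derived weights.

    The two images are flattened into a 2 x N data matrix; the leading
    eigenvector of its 2 x 2 covariance gives non-negative fusion weights
    that are normalised to sum to one.
    """
    data = np.stack([hf_ir.ravel(), hf_vis.ravel()])  # shape (2, N)
    cov = np.cov(data)                                # 2 x 2 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)            # eigenvalues ascending
    v = np.abs(eigvecs[:, -1])                        # leading eigenvector
    w = v / v.sum()                                   # normalise to weights
    return w[0] * hf_ir + w[1] * hf_vis              # weighted pixel-wise sum
```

Because the weights are non-negative and sum to one, the fused output is a pixel-wise convex combination of the two inputs, which keeps its dynamic range bounded by the sources.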
About the journal:
The journal (under its former title Optica Acta) was founded in 1953 - some years before the advent of the laser - as an international journal of optics. Since then optical research has changed greatly; fresh areas of inquiry have been explored, different techniques have been employed and the range of application has greatly increased. The journal has continued to reflect these advances as part of its steadily widening scope.
Journal of Modern Optics aims to publish original and timely contributions to optical knowledge from educational institutions, government establishments and industrial R&D groups worldwide. The whole field of classical and quantum optics is covered. Papers may deal with the applications or fundamentals of modern optics, considering both experimental and theoretical aspects of contemporary research. In addition to regular papers, there are topical and tutorial reviews, and special issues on highlighted areas.
All manuscript submissions are subject to initial appraisal by the Editor, and, if found suitable for further consideration, to peer review by independent, anonymous expert referees.
General topics covered include:
• Optical and photonic materials (inc. metamaterials)
• Plasmonics and nanophotonics
• Quantum optics (inc. quantum information)
• Optical instrumentation and technology (inc. detectors, metrology, sensors, lasers)
• Coherence, propagation, polarization and manipulation (classical optics)
• Scattering and holography (diffractive optics)
• Optical fibres and optical communications (inc. integrated optics, amplifiers)
• Vision science and applications
• Medical and biomedical optics
• Nonlinear and ultrafast optics (inc. harmonic generation, multiphoton spectroscopy)
• Imaging and image processing