Singular value decomposition and saliency-map based image fusion for visible and infrared images

C. Rajakumar, S. Satheeskumaran
DOI: 10.1080/19479832.2020.1864786
Journal: International Journal of Image and Data Fusion, Vol. 13(1), pp. 21–43
Published: 2021-01-11 (Journal Article; JCR Q3, Remote Sensing; Impact Factor 1.8)
Citations: 3

Abstract

Multiple sensors capture many images, and in many applications these images are fused into a single image to obtain high spatial and spectral resolution. A new image fusion method is proposed in this work to enhance the fusion of infrared and visible images. Image fusion methods based on convolutional neural networks, edge-preserving filters and low-rank approximation have high computational complexity and are slow for complex tasks. To overcome these drawbacks, singular value decomposition (SVD) based image fusion is proposed. SVD performs an exact decomposition in which most of the information of a given image is packed into a few singular values. Singular value decomposition decomposes the source images into base and detail layers. Visual saliency and weight maps are constructed to integrate salient and complementary information into the detail layers. Statistical techniques are used to fuse the base layers, and the fused image is a linear combination of the base and detail layers. Visual inspection and fusion metrics are used to validate the performance of the image fusion. Testing the proposed method on several image pairs indicates that it is superior or comparable to existing methods.
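The pipeline the abstract describes can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: the rank-k truncation used for the base layer, the simple averaging rule for base-layer fusion, and the |detail|-ratio weight map standing in for the paper's visual-saliency map are all assumptions made for this example.

```python
import numpy as np

def svd_base_detail(img, k=5):
    """Split a grayscale image (float array) into a low-rank base layer,
    reconstructed from the top-k singular values, and a detail layer
    holding the residual fine structure."""
    U, s, Vt = np.linalg.svd(img, full_matrices=False)
    base = (U[:, :k] * s[:k]) @ Vt[:k, :]   # rank-k approximation
    detail = img - base                     # residual: edges, texture
    return base, detail

def fuse(ir, vis, k=5):
    """Fuse an infrared and a visible image of the same shape.
    Base layers are averaged; detail layers are blended with a
    per-pixel weight map favouring the layer with larger magnitude."""
    b_ir, d_ir = svd_base_detail(ir, k)
    b_vis, d_vis = svd_base_detail(vis, k)
    base = 0.5 * (b_ir + b_vis)
    w = np.abs(d_ir) / (np.abs(d_ir) + np.abs(d_vis) + 1e-12)
    detail = w * d_ir + (1.0 - w) * d_vis
    return np.clip(base + detail, 0.0, 1.0)
```

Because the SVD is exact, `base + detail` reconstructs each source image perfectly; only the fusion rules decide what survives into the output.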
Source journal metrics: CiteScore 5.00; self-citation rate 0.00%; articles published: 10.
Journal description: International Journal of Image and Data Fusion provides a single source of information for all aspects of image and data fusion methodologies, developments, techniques and applications. Image and data fusion techniques are important for combining the many sources of satellite, airborne and ground-based imaging systems, and for integrating these with other related data sets for enhanced information extraction and decision making. Image and data fusion aims at the integration of multi-sensor, multi-temporal, multi-resolution and multi-platform image data, together with geospatial data, GIS, in-situ, and other statistical data sets, for improved information extraction and increased reliability of the information. This leads to more accurate information that provides for robust operational performance, i.e. increased confidence, reduced ambiguity and improved classification enabling evidence-based management. The journal welcomes original research papers, review papers, shorter letters, technical articles, book reviews and conference reports in all areas of image and data fusion including, but not limited to, the following aspects and topics: • Automatic registration/geometric aspects of fusing images with different spatial, spectral, temporal resolutions; phase information; or acquired in different modes • Pixel, feature and decision level fusion algorithms and methodologies • Data assimilation: fusing data with models • Multi-source classification and information extraction • Integration of satellite, airborne and terrestrial sensor systems • Fusing temporal data sets for change detection studies (e.g. for Land Cover/Land Use Change studies) • Image and data mining from multi-platform, multi-source, multi-scale, multi-temporal data sets (e.g. geometric information, topological information, statistical information, etc.).