Improved multi-focus image fusion using online convolutional sparse coding based on sample-dependent dictionary

IF 3.4 · JCR Q2, Engineering, Electrical & Electronic · CAS Tier 3, Engineering & Technology · Signal Processing: Image Communication, Volume 130, Article 117213 · Pub Date: 2024-09-19 · DOI: 10.1016/j.image.2024.117213 · Full text: https://www.sciencedirect.com/science/article/pii/S0923596524001140
Sidi He, Chengfang Zhang, Haoyue Li, Ziliang Feng
{"title":"Improved multi-focus image fusion using online convolutional sparse coding based on sample-dependent dictionary","authors":"Sidi He ,&nbsp;Chengfang Zhang ,&nbsp;Haoyue Li ,&nbsp;Ziliang Feng","doi":"10.1016/j.image.2024.117213","DOIUrl":null,"url":null,"abstract":"<div><div>Multi-focus image fusion merges multiple images captured from different focused regions of a scene to create a fully-focused image. Convolutional sparse coding (CSC) methods are commonly employed for accurate extraction of focused regions, but they often disregard computational costs. To overcome this, an online convolutional sparse coding (OCSC) technique was introduced, but its performance is still limited by the number of filters used, affecting overall performance negatively. To address these limitations, a novel approach called Sample-Dependent Dictionary-based Online Convolutional Sparse Coding (SCSC) was proposed. SCSC enables the utilization of additional filters while maintaining low time and space complexity for processing high-dimensional or large data. Leveraging the computational efficiency and effective global feature extraction of SCSC, we propose a novel method for multi-focus image fusion. Our method involves a two-layer decomposition of each source image, yielding a base layer capturing the predominant features and a detail layer containing finer details. The amalgamation of the fused base and detail layers culminates in the reconstruction of the final image. The proposed method significantly mitigates artifacts, preserves fine details at the focus boundary, and demonstrates notable enhancements in both visual quality and objective evaluation of multi-focus image fusion.</div></div>","PeriodicalId":49521,"journal":{"name":"Signal Processing-Image Communication","volume":"130 ","pages":"Article 117213"},"PeriodicalIF":3.4000,"publicationDate":"2024-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Signal Processing-Image Communication","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0923596524001140","RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0

Abstract

Multi-focus image fusion merges multiple images of a scene, each focused on a different region, into a single fully-focused image. Convolutional sparse coding (CSC) methods are commonly employed to extract focused regions accurately, but they often come at a high computational cost. Online convolutional sparse coding (OCSC) was introduced to reduce this cost, but its performance remains limited by the number of filters it can afford to use. To address these limitations, a novel approach called Sample-Dependent Dictionary-based Online Convolutional Sparse Coding (SCSC) was proposed. SCSC supports additional filters while maintaining low time and space complexity when processing high-dimensional or large-scale data. Leveraging the computational efficiency and effective global feature extraction of SCSC, we propose a novel method for multi-focus image fusion. Our method decomposes each source image into two layers: a base layer capturing the predominant features and a detail layer containing the finer details. The base and detail layers are fused separately, and the fused layers are then recombined to reconstruct the final image. The proposed method significantly mitigates artifacts, preserves fine detail at focus boundaries, and shows notable improvements in both visual quality and objective evaluation of multi-focus image fusion.
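To make the pipeline concrete, below is a minimal sketch of the two-layer decompose/fuse/reconstruct structure described in the abstract. It does not reproduce the paper's SCSC model: the base layer is approximated here with a plain Gaussian low-pass filter, base layers are fused by averaging, and detail layers by a per-pixel maximum-absolute rule. All function names and parameters (two_layer_decompose, fuse_multi_focus, sigma) are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the two-layer decompose/fuse/reconstruct pipeline
# described in the abstract. NOT the paper's SCSC model: the base layer
# comes from a plain Gaussian low-pass filter, base layers are fused by
# averaging, detail layers by a per-pixel max-absolute rule.
import numpy as np
from scipy.ndimage import gaussian_filter


def two_layer_decompose(image, sigma=2.0):
    """Split an image into a smooth base layer and a residual detail layer."""
    base = gaussian_filter(image.astype(np.float64), sigma=sigma)
    return base, image - base


def fuse_multi_focus(images, sigma=2.0):
    """Fuse differently-focused source images into one all-in-focus image."""
    bases, details = zip(*(two_layer_decompose(img, sigma) for img in images))
    bases, details = np.stack(bases), np.stack(details)  # each (n, H, W)

    # Base layers carry the predominant low-frequency structure; averaging
    # them preserves overall brightness without introducing seams.
    fused_base = bases.mean(axis=0)

    # Detail layers carry fine, focus-dependent structure; at each pixel,
    # keep the coefficient of largest magnitude, i.e. the sharpest source.
    idx = np.abs(details).argmax(axis=0)
    fused_detail = np.take_along_axis(details, idx[None], axis=0)[0]

    # Reconstruct the final image by recombining the fused layers.
    return np.clip(fused_base + fused_detail, 0.0, 255.0)


# Usage: two synthetic sources, each defocused in a different half.
rng = np.random.default_rng(0)
scene = rng.uniform(0.0, 255.0, size=(64, 64))
left = scene.copy()
left[:, 32:] = gaussian_filter(scene, 3.0)[:, 32:]   # right half blurred
right = scene.copy()
right[:, :32] = gaussian_filter(scene, 3.0)[:, :32]  # left half blurred
fused = fuse_multi_focus([left, right])
```

In the paper itself, the SCSC sparse coefficients would presumably stand in for the simple Gaussian split used here, with the fusion rules applied to those coefficients rather than to raw pixel residuals; this sketch shows only the overall layer structure.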
Source Journal

Signal Processing: Image Communication
Category: Engineering & Technology - Engineering, Electrical & Electronic

CiteScore: 8.40
Self-citation rate: 2.90%
Articles per year: 138
Review time: 5.2 months
Journal Description

Signal Processing: Image Communication is an international journal for the development of the theory and practice of image communication. Its primary objectives are the following:

To present a forum for the advancement of theory and practice of image communication.
To stimulate cross-fertilization between areas similar in nature which have traditionally been separated, for example, various aspects of visual communications and information systems.
To contribute to a rapid information exchange between the industrial and academic environments.

The editorial policy and the technical content of the journal are the responsibility of the Editor-in-Chief, the Area Editors and the Advisory Editors. The journal is self-supporting from subscription income and contains a minimum amount of advertisements. Advertisements are subject to the prior approval of the Editor-in-Chief. The journal welcomes contributions from every country in the world.

Signal Processing: Image Communication publishes articles relating to aspects of the design, implementation and use of image communication systems. The journal features original research work, tutorial and review articles, and accounts of practical developments. Subjects of interest include image/video coding, 3D video representations and compression, 3D graphics and animation compression, HDTV and 3DTV systems, video adaptation, video over IP, peer-to-peer video networking, interactive visual communication, multi-user video conferencing, wireless video broadcasting and communication, visual surveillance, 2D and 3D image/video quality measures, pre/post processing, video restoration and super-resolution, multi-camera video analysis, motion analysis, content-based image/video indexing and retrieval, face and gesture processing, video synthesis, 2D and 3D image/video acquisition and display technologies, and architectures for image/video processing and communication.
Latest Articles in This Journal

SES-ReNet: Lightweight deep learning model for human detection in hazy weather conditions
HOI-V: One-stage human-object interaction detection based on multi-feature fusion in videos
Text in the dark: Extremely low-light text image enhancement
High efficiency deep image compression via channel-wise scale adaptive latent representation learning
Double supervision for scene text detection and recognition based on BMINet