Image Super-Resolution Reconstruction Model Based on Multi-Feature Fusion

Zemiao Dai
DOI: 10.1142/s0129156424400032
*International Journal of High Speed Electronics and Systems*, published 2024-01-23

Abstract

Because everyday image acquisition is constrained by imaging equipment and transmission conditions, the images obtained are usually of low resolution, and raising resolution by upgrading hardware is costly in both time and money. This paper proposes an image super-resolution reconstruction algorithm based on MSRN, a spatio-temporally dependent residual network that fuses multiple features. The algorithm first extracts the input image's shallow features with a surface feature extraction module, then adaptively learns deep features with a deep residual aggregation module, and finally fuses the multiple features and learns the global residual. The high-resolution image is produced by an up-sampling module followed by a reconstruction module. Within the model, convolution kernels of different sizes and skip connections are used to extract additional high-frequency information, and a spatio-temporal attention mechanism is introduced to attend to finer image details. Experiments show that, compared with SRGAN, VDSR, and the Laplacian Pyramid SRN, the proposed algorithm achieves better reconstruction, with clearer texture details across different scaling factors. In objective evaluation, the proposed algorithm improves peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) over SRGAN.
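The objective metrics cited above, PSNR and SSIM, can be sketched as follows. The paper does not provide its evaluation code, so this is an illustrative pure-Python version: PSNR uses the standard definition, while SSIM is shown in a simplified global-statistics form (the standard metric averages this quantity over local sliding windows).

```python
import math

def psnr(ref, img, max_val=255.0):
    """Peak signal-to-noise ratio between two equally sized images,
    given as flat sequences of pixel intensities."""
    mse = sum((r - x) ** 2 for r, x in zip(ref, img)) / len(ref)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

def ssim_global(ref, img, max_val=255.0):
    """Simplified SSIM computed from global image statistics.
    The standard windowed SSIM averages this over local patches."""
    n = len(ref)
    mu_x = sum(ref) / n
    mu_y = sum(img) / n
    var_x = sum((r - mu_x) ** 2 for r in ref) / n
    var_y = sum((x - mu_y) ** 2 for x in img) / n
    cov = sum((r - mu_x) * (x - mu_y) for r, x in zip(ref, img)) / n
    # Stabilizing constants from the original SSIM formulation.
    c1 = (0.01 * max_val) ** 2
    c2 = (0.03 * max_val) ** 2
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    )
```

Higher is better for both metrics: PSNR diverges to infinity and SSIM approaches 1 as the reconstruction matches the reference exactly, which is the sense in which the abstract reports improvements over SRGAN.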
About the journal: Launched in 1990, the International Journal of High Speed Electronics and Systems (IJHSES) has served graduate students and those in R&D, managerial and marketing positions by giving state-of-the-art data, and the latest research trends. Its main charter is to promote engineering education by advancing interdisciplinary science between electronics and systems and to explore high speed technology in photonics and electronics. IJHSES, a quarterly journal, continues to feature a broad coverage of topics relating to high speed or high performance devices, circuits and systems.