SRARDA: A lightweight adaptive residual dense attention generative adversarial network for image super-resolution

IF 3.1 · CAS Tier 3 (Physics & Astronomy) · JCR Q2 (Engineering) · Optik · Pub Date: 2024-09-11 · DOI: 10.1016/j.ijleo.2024.172034
Citations: 0

Abstract

Image super-resolution (SR) is the task of inferring a high-resolution (HR) image from one or more low-resolution (LR) inputs. Traditional networks are evaluated with pixel-level metrics such as peak signal-to-noise ratio (PSNR), which do not always align with human perception of image quality: they often produce overly smooth images that lack high-frequency texture and appear unnatural. In this paper, we therefore propose SRARDA, a lightweight adaptive residual dense attention generative adversarial network for image SR. First, our generator adopts the residual-in-residual (RIR) structure but redesigns its basic module: using an adaptive residual connection (ARC) to dynamically adjust the importance of the residual and main paths, we design a novel adaptive residual dense attention block (ARDAB) that strengthens the generator's feature extraction capability. In addition, we build a high-frequency filtering unit (HFU) to extract more high-frequency features from the LR space. Finally, to make full use of the discriminator, we adopt a WGAN loss to measure the difference between the HR image and the reconstructed image. Experiments demonstrate that SRARDA effectively mitigates over-smoothing in reconstructed images while also enhancing visual quality.
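The abstract does not give the ARC's exact form, but the stated idea, dynamically weighting the residual (identity) path against the main (transformed) path, can be sketched in plain Python. The function names, the two-logit softmax parameterization, and the 1-D feature lists are illustrative assumptions, not the paper's implementation:

```python
import math

def softmax2(a, b):
    """Normalize two scalar logits into weights that sum to 1."""
    ea, eb = math.exp(a), math.exp(b)
    return ea / (ea + eb), eb / (ea + eb)

def adaptive_residual_connection(x, branch, w_main=0.0, w_res=0.0):
    """Hypothetical ARC sketch: blend the block's output (main path) with its
    input (residual path) using scalar logits that a network would learn.
    Features are 1-D lists here for simplicity; the paper operates on
    feature maps inside each ARDAB."""
    alpha, beta = softmax2(w_main, w_res)  # relative importance of the two paths
    y = branch(x)
    return [alpha * yi + beta * xi for xi, yi in zip(x, y)]

# Toy usage: the "branch" doubles each feature; equal logits give a 50/50 blend.
out = adaptive_residual_connection([1.0, 2.0], lambda v: [2 * vi for vi in v])
# → [1.5, 3.0]
```

In a real SR network the logits `w_main` and `w_res` would be trainable parameters, letting each block learn how much of its transformation to apply versus pass through unchanged.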

Source journal
Optik (Physics – Optics)
CiteScore: 6.90
Self-citation rate: 12.90%
Articles per year: 1471
Review time: 46 days
Journal description: Optik publishes articles on all subjects related to light and electron optics and offers a survey on the state of research and technical development within the following fields. Optics: optics design, geometrical and beam optics, wave optics; optical and micro-optical components, diffractive optics, devices and systems; photoelectric and optoelectronic devices; optical properties of materials, nonlinear optics, wave propagation and transmission in homogeneous and inhomogeneous materials; information optics, image formation and processing, holographic techniques, microscope and spectrometer techniques, and image analysis; optical testing and measuring techniques; optical communication and computing; physiological optics; as well as other related topics.