HASN: Hybrid Attention Separable Network for Efficient Image Super-Resolution

Weifeng Cao, Xiaoyan Lei, Jun Shi, Wanyong Liang, Jie Liu, Zongfei Bai
Journal: The Visual Computer
DOI: 10.1007/s00371-024-03610-0
Published: 2024-09-03 (Journal Article)
Citations: 0

Abstract




Recently, lightweight methods for single-image super-resolution have gained significant popularity and achieved impressive performance under limited hardware resources. These methods demonstrate that residual feature distillation is an effective way to enhance performance. However, we find that using residual connections after each block increases the model's storage and computational cost. Therefore, to simplify the network structure and learn higher-level features and relationships between features, we use depth-wise separable convolutions, fully connected layers, and activation functions as the basic feature extraction modules; replacing standard convolutions with depth-wise separable convolutions significantly reduces the computational load and the number of parameters while maintaining strong feature extraction capability. To further enhance model performance, we propose the hybrid attention separable block, which combines channel attention and spatial attention and thus exploits their complementary advantages. During the training phase, we also adopt a warm-start retraining strategy to further exploit the model's potential. Extensive experiments demonstrate the effectiveness of our approach: our method achieves a smaller model size and lower computational complexity without compromising performance. Code is available at https://github.com/nathan66666/HASN.git
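To see why swapping standard convolutions for depth-wise separable ones shrinks the parameter count, a k×k standard convolution from C_in to C_out channels needs k·k·C_in·C_out weights, whereas the separable version needs only k·k·C_in (depth-wise) plus C_in·C_out (point-wise). The sketch below works through this arithmetic; it is not the authors' implementation, and the layer sizes (3×3 kernel, 64 channels, typical of lightweight SR networks) are hypothetical:

```python
def conv_params(k, c_in, c_out):
    # standard k x k convolution: every output channel mixes all input channels
    return k * k * c_in * c_out

def separable_params(k, c_in, c_out):
    # depth-wise stage: one k x k filter per input channel (no channel mixing)
    depthwise = k * k * c_in
    # point-wise stage: 1 x 1 convolution that mixes channels
    pointwise = c_in * c_out
    return depthwise + pointwise

# hypothetical layer sizes for illustration
k, c_in, c_out = 3, 64, 64
std = conv_params(k, c_in, c_out)       # 3*3*64*64 = 36864
sep = separable_params(k, c_in, c_out)  # 576 + 4096 = 4672
print(std, sep, round(std / sep, 1))    # roughly 7.9x fewer parameters
```

Because the per-position multiply-accumulate count scales the same way, the FLOP reduction for these layers is of the same order as the parameter reduction.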
