HAMSA: Hybrid attention transformer and multi-scale alignment aggregation network for video super-resolution

Digital Signal Processing, Vol. 161, Article 105098. Pub Date: 2025-02-25. DOI: 10.1016/j.dsp.2025.105098. Impact Factor 3.0; JCR Q2, CAS Region 3 (Engineering, Electrical & Electronic).
Hanguang Xiao , Hao Wen , Xin Wang , Kun Zuo , Tianqi Liu , Wei Wang , Yong Xu
Citations: 0

Abstract

Video Super-Resolution (VSR) aims to enhance the resolution of video frames by utilizing multiple adjacent low-resolution frames. To extract cross-frame information, most existing methods perform alignment using optical flow or offsets learned through deformable convolution. However, due to the complexity of real-world motion, estimating flow or motion offsets is challenging and often inaccurate. To address this problem, we propose a novel hybrid attention transformer and multi-scale alignment aggregation network for video super-resolution, named HAMSA. HAMSA adopts a U-shaped architecture to achieve progressive alignment in a multi-scale manner. Specifically, we first develop a hybrid attention transformer (HAT) feature extraction module, which uses the proposed channel motion attention (CMA) to extract features that facilitate inter-frame alignment. Second, we design a U-shaped multi-scale feature alignment (MSFA) module that ensures precise motion estimation between frames by starting from large-scale features, gradually aligning them at smaller scales, and then restoring them with skip connections and upsampling. Finally, to further refine the alignment, we introduce a non-local feature aggregation (NLFA) module, which applies non-local operations to minimize alignment errors and enhance detail fidelity, thereby improving the overall quality of the super-resolved frames. Extensive experiments on the Vid4, Vimeo90k-T, and REDS4 datasets demonstrate that HAMSA achieves superior VSR performance compared to other state-of-the-art (SOTA) methods while maintaining a good balance between model size and performance.
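The abstract does not give the exact formulation of the NLFA module, but the generic non-local operation it builds on can be sketched as follows. This is a minimal illustrative sketch only: the array shapes, the function name, and the use of dot-product softmax similarity are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

def nonlocal_aggregate(feats: np.ndarray) -> np.ndarray:
    """Generic non-local aggregation over spatial positions.

    feats: (N, C) array, one C-dimensional feature per spatial position.
    Each output position is a softmax-weighted sum over ALL positions,
    weighted by dot-product similarity, so every position can draw on
    globally relevant features rather than a local neighborhood only.
    """
    sim = feats @ feats.T                      # (N, N) pairwise similarities
    sim -= sim.max(axis=1, keepdims=True)      # shift for numerical stability
    w = np.exp(sim)
    w /= w.sum(axis=1, keepdims=True)          # row-wise softmax weights
    return w @ feats                           # (N, C) aggregated features

rng = np.random.default_rng(0)
x = rng.standard_normal((16, 8))               # 16 positions, 8 channels
y = nonlocal_aggregate(x)
print(y.shape)
```

In a VSR pipeline such an operation would be applied to aligned multi-frame features so that residual alignment errors at one position can be compensated by similar features elsewhere; the learned projections a real non-local block would use are omitted here for brevity.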
Source Journal
Digital Signal Processing (Engineering: Electrical & Electronic)
CiteScore: 5.30
Self-citation rate: 17.20%
Articles per year: 435
Review time: 66 days
Journal description: Digital Signal Processing: A Review Journal is one of the oldest and most established journals in the field of signal processing, yet it aims to be the most innovative. The journal invites top-quality research articles at the frontiers of research in all aspects of signal processing. Our objective is to provide a platform for the publication of ground-breaking research in signal processing with both academic and industrial appeal. The journal has a special emphasis on statistical signal processing methodology, such as Bayesian signal processing, and encourages articles on emerging applications of signal processing such as:
• big data
• machine learning
• internet of things
• information security
• systems biology and computational biology
• financial time series analysis
• autonomous vehicles
• quantum computing
• neuromorphic engineering
• human-computer interaction and intelligent user interfaces
• environmental signal processing
• geophysical signal processing, including seismic signal processing
• chemoinformatics and bioinformatics
• audio, visual and performance arts
• disaster management and prevention
• renewable energy
Latest articles in this journal:
• Zero-reference illumination estimation model for image enhancement in underground mines
• Lightweight speech enhancement with state-space model and depthwise separable convolution
• A visual security image encryption algorithm based on 1D-CHCCM and super-resolution reconstruction
• No-reference magnetic resonance image quality assessment via local-global feature integration
• Low Complexity estimation of fractional delay-Doppler-Angle parameters in MIMO-OTFS ISAC system