Resampling video super-resolution based on multi-scale guided optical flow

IF 4.9 · JCR Q1 (Computer Science, Hardware & Architecture) · CAS Tier 3 (Computer Science) · Computers & Electrical Engineering, Vol. 123, Article 110176 · Pub Date: 2025-04-01 (Epub: 2025-02-11) · DOI: 10.1016/j.compeleceng.2025.110176
Puying Li, Fuzhen Zhu, Yong Liu, Qi Zhang
Cited by: 0

Abstract

Existing video super-resolution (VSR) methods are inadequate for dealing with inter-frame motion and spatial distortion problems, especially in high-motion scenes, which tend to lead to loss of details and degradation of reconstruction quality. To address these challenges, this paper puts forward a resampling video super-resolution algorithm based on multiscale guided optical flow. The method combines multi-scale guided optical flow estimation to address the issue of inter-frame motion and a resampling deformable convolution module to address the issue of spatial distortion. Specifically, features are first extracted from low-quality video frames using a convolutional layer, followed by feature extraction with Residual Swin Transformer Blocks (RSTBs). In the feature alignment module, a multiscale-guided optical flow estimation approach is employed, which addresses the inter-frame motion problem across different video segments and performs video frame interpolation and super-resolution reconstruction simultaneously. Furthermore, spatial alignment is achieved by integrating resampling into the deformable convolution module, mitigating spatial distortion. Finally, multiple Residual Swin Transformer Blocks (RSTBs) are used to extract and fuse features, and pixel rearrangement layers are employed to reconstruct high-quality video frames. The experimental results on the REDS, Vid4, and UDM10 datasets show that our method significantly outperforms current state-of-the-art (SOTA) techniques, with improvements of 0.61 dB in Peak Signal-to-Noise Ratio (PSNR) and 0.0121 in Structural Similarity (SSIM), validating the effectiveness and superiority of the method.
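The "resampling" the abstract refers to amounts to sampling a reference frame at flow-displaced, generally non-integer positions. The paper's actual resampling deformable convolution module is not reproduced here; as an illustrative sketch only, the hypothetical `warp` function below backward-warps a single-channel frame by a per-pixel optical-flow field using bilinear interpolation with border clamping.

```python
import math

def warp(frame, flow):
    """Backward-warp a grayscale frame by a per-pixel flow field.

    frame: 2-D list [H][W] of floats.
    flow: 2-D list [H][W] of (dx, dy) displacement pairs; output pixel
    (y, x) is sampled from position (y + dy, x + dx) in `frame`, using
    bilinear interpolation over the four nearest neighbours, with
    out-of-range neighbours clamped to the frame border.
    """
    H, W = len(frame), len(frame[0])
    clamp = lambda v, hi: max(0, min(v, hi))
    out = [[0.0] * W for _ in range(H)]
    for y in range(H):
        for x in range(W):
            dx, dy = flow[y][x]
            sx, sy = x + dx, y + dy          # sub-pixel source position
            x0, y0 = int(math.floor(sx)), int(math.floor(sy))
            fx, fy = sx - x0, sy - y0        # fractional offsets
            x0c, x1c = clamp(x0, W - 1), clamp(x0 + 1, W - 1)
            y0c, y1c = clamp(y0, H - 1), clamp(y0 + 1, H - 1)
            top = frame[y0c][x0c] * (1 - fx) + frame[y0c][x1c] * fx
            bot = frame[y1c][x0c] * (1 - fx) + frame[y1c][x1c] * fx
            out[y][x] = top * (1 - fy) + bot * fy
    return out
```

In a flow-guided VSR pipeline, a warp of this kind aligns a neighbouring frame's features to the current frame before fusion; the deformable-convolution variant described in the paper additionally learns per-pixel sampling offsets rather than taking them solely from the estimated flow.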
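The final "pixel rearrangement" step in the abstract is the standard sub-pixel shuffle used to upscale feature maps. The sketch below is not the authors' code; `pixel_shuffle` is a hypothetical pure-Python rendering of the usual rearrangement, in which each group of r×r channels is redistributed into an r-times-larger spatial grid.

```python
def pixel_shuffle(feat, r):
    """Rearrange a (C*r*r, H, W) feature map into (C, H*r, W*r).

    feat: nested lists indexed [channel][row][col].
    r: upscaling factor.
    Input channel c*r*r + dy*r + dx supplies the output pixel at
    (row*r + dy, col*r + dx) of output channel c, matching the usual
    sub-pixel convolution layout.
    """
    cr2 = len(feat)
    H, W = len(feat[0]), len(feat[0][0])
    assert cr2 % (r * r) == 0, "channel count must be divisible by r*r"
    C = cr2 // (r * r)
    out = [[[0.0] * (W * r) for _ in range(H * r)] for _ in range(C)]
    for c in range(C):
        for dy in range(r):
            for dx in range(r):
                src = feat[c * r * r + dy * r + dx]
                for y in range(H):
                    for x in range(W):
                        out[c][y * r + dy][x * r + dx] = src[y][x]
    return out
```

This layer is attractive for super-resolution because the network computes all r*r sub-pixel values as extra channels at low resolution, and the rearrangement itself is a parameter-free reshape.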
Source journal
Computers & Electrical Engineering (Engineering Technology; Engineering: Electronic & Electrical)
CiteScore: 9.20
Self-citation rate: 7.00%
Articles per year: 661
Review time: 47 days
About the journal: The impact of computers has nowhere been more revolutionary than in electrical engineering. The design, analysis, and operation of electrical and electronic systems are now dominated by computers, a transformation that has been motivated by the natural ease of interface between computers and electrical systems, and the promise of spectacular improvements in speed and efficiency. Published since 1973, Computers & Electrical Engineering provides rapid publication of topical research into the integration of computer technology and computational techniques with electrical and electronic systems. The journal publishes papers featuring novel implementations of computers and computational techniques in areas like signal and image processing, high-performance computing, parallel processing, and communications. Special attention will be paid to papers describing innovative architectures, algorithms, and software tools.
Latest articles in this journal
- Real-time monitoring and early warning for mitigation of sub-synchronous oscillations in wind Farms via supplementary damping controller and XGBoost-driven severity prediction
- EVerGen: Optimal path planning for electric vehicle using modified genetic algorithm in internet of vehicular things
- SRAM PUF-based logic locking framework for IoT authentication and IP protection
- Neuroauthnet: A brainwave-based authentication framework using BCI and deep learning for privacy-preserving identity verification
- A novel hybrid gated recurrent ensemble model for stage-based clinical diagnosis of neurological disorders