Semantic guidance incremental network for efficiency video super-resolution

Xiaonan He, Yukun Xia, Yuansong Qiao, Brian Lee, Yuhang Ye
{"title":"Semantic guidance incremental network for efficiency video super-resolution","authors":"Xiaonan He, Yukun Xia, Yuansong Qiao, Brian Lee, Yuhang Ye","doi":"10.1007/s00371-024-03488-y","DOIUrl":null,"url":null,"abstract":"<p>In video streaming, bandwidth constraints significantly affect client-side video quality. Addressing this, deep neural networks offer a promising avenue for implementing video super-resolution (VSR) at the user end, leveraging advancements in modern hardware, including mobile devices. The principal challenge in VSR is the computational intensity involved in processing temporal/spatial video data. Conventional methods, uniformly processing entire scenes, often result in inefficient resource allocation. This is evident in the over-processing of simpler regions and insufficient attention to complex regions, leading to edge artifacts in merged regions. Our innovative approach employs semantic segmentation and spatial frequency-based categorization to divide each video frame into regions of varying complexity: simple, medium, and complex. These are then processed through an efficient incremental model, optimizing computational resources. A key innovation is the sparse temporal/spatial feature transformation layer, which mitigates edge artifacts and ensures seamless integration of regional features, enhancing the naturalness of the super-resolution outcome. Experimental results demonstrate that our method significantly boosts VSR efficiency while maintaining effectiveness. This marks a notable advancement in streaming video technology, optimizing video quality with reduced computational demands. This approach, featuring semantic segmentation, spatial frequency analysis, and an incremental network structure, represents a substantial improvement over traditional VSR methodologies, addressing the core challenges of efficiency and quality in high-resolution video streaming.\n</p>","PeriodicalId":501186,"journal":{"name":"The Visual Computer","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"The Visual Computer","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1007/s00371-024-03488-y","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

In video streaming, bandwidth constraints significantly affect client-side video quality. Addressing this, deep neural networks offer a promising avenue for implementing video super-resolution (VSR) at the user end, leveraging advancements in modern hardware, including mobile devices. The principal challenge in VSR is the computational intensity involved in processing temporal/spatial video data. Conventional methods, uniformly processing entire scenes, often result in inefficient resource allocation. This is evident in the over-processing of simpler regions and insufficient attention to complex regions, leading to edge artifacts in merged regions. Our innovative approach employs semantic segmentation and spatial frequency-based categorization to divide each video frame into regions of varying complexity: simple, medium, and complex. These are then processed through an efficient incremental model, optimizing computational resources. A key innovation is the sparse temporal/spatial feature transformation layer, which mitigates edge artifacts and ensures seamless integration of regional features, enhancing the naturalness of the super-resolution outcome. Experimental results demonstrate that our method significantly boosts VSR efficiency while maintaining effectiveness. This marks a notable advancement in streaming video technology, optimizing video quality with reduced computational demands. This approach, featuring semantic segmentation, spatial frequency analysis, and an incremental network structure, represents a substantial improvement over traditional VSR methodologies, addressing the core challenges of efficiency and quality in high-resolution video streaming.
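To make the frequency-based categorization step concrete, the sketch below illustrates one plausible way to split a frame into simple, medium, and complex patches by measuring high-frequency spectral energy per tile. This is a minimal illustration, not the authors' implementation: the patch size, the FFT-energy criterion, and the thresholds t_simple and t_complex are all assumptions chosen for demonstration, and the paper's actual method additionally uses semantic segmentation, which is omitted here.

# Illustrative sketch (NOT the paper's code): tile a grayscale frame into
# patches, score each patch by the fraction of its 2-D FFT energy lying
# outside a central low-frequency band, and bin patches into three classes
# (0 = simple, 1 = medium, 2 = complex) that a tiered SR model could route
# to sub-networks of different capacity. Thresholds are hypothetical.
import numpy as np

def high_freq_energy(patch: np.ndarray) -> float:
    """Fraction of spectral energy outside the low-frequency band."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(patch))) ** 2
    h, w = spectrum.shape
    # Treat the central (h/4 x w/4) block of the shifted spectrum as
    # "low frequency"; everything else counts as detail/texture energy.
    cy, cx = h // 2, w // 2
    ry, rx = h // 8, w // 8
    low = spectrum[cy - ry:cy + ry, cx - rx:cx + rx].sum()
    total = spectrum.sum() + 1e-12
    return float(1.0 - low / total)

def categorize_patches(frame: np.ndarray, patch: int = 32,
                       t_simple: float = 0.3, t_complex: float = 0.6):
    """Return a (rows, cols) grid of labels: 0=simple, 1=medium, 2=complex.

    t_simple and t_complex are illustrative values, not from the paper.
    """
    rows, cols = frame.shape[0] // patch, frame.shape[1] // patch
    labels = np.zeros((rows, cols), dtype=np.int64)
    for r in range(rows):
        for c in range(cols):
            tile = frame[r * patch:(r + 1) * patch,
                         c * patch:(c + 1) * patch]
            e = high_freq_energy(tile)
            labels[r, c] = 0 if e < t_simple else (1 if e < t_complex else 2)
    return labels

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic 256x256 frame: smooth gradient plus a noisy, texture-rich corner.
    frame = np.tile(np.linspace(0.0, 1.0, 256), (256, 1))
    frame[:64, :64] += 0.5 * rng.standard_normal((64, 64))
    grid = categorize_patches(frame)
    print("patches per class (simple, medium, complex):",
          np.bincount(grid.ravel(), minlength=3))

Under this scheme, the smooth gradient regions fall into the "simple" class and the noisy corner into "complex", matching the abstract's intuition that flat regions need far less super-resolution compute than textured ones. The remaining pieces described in the abstract, the incremental model and the sparse temporal/spatial feature transformation layer that blends region boundaries, are architectural details not reconstructible from the abstract alone.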

