An Enhanced Video Compression Framework based on Rescaling Networks

Zhiyu Chen, L. Chen
{"title":"An Enhanced Video Compression Framework based on Rescaling Networks","authors":"Zhiyu Chen, L. Chen","doi":"10.1109/BMSB58369.2023.10211137","DOIUrl":null,"url":null,"abstract":"Recently, enhanced video compression frameworks compatible with traditional standard codecs have achieved competitive performance especially in low-bandwidth scenarios, among which the downsampling-based preprocessing networks and super-resolution based postprocessing networks are commonly applied. Surrogate networks are further employed as the replacement of non-differentiable standard codecs during training. However, the discard of minor information such as high frequency spatial textures in the process of downsampling restricts the reconstruction quality. Moreover, existing surrogate networks merely imitate the intra-frame coding structure of standard codecs without leveraging inter-frame relations. In this paper, we propose a rescaling-based enhanced video compression framework. The main video stream preserves critical spatial structures and complete temporal information, while another lightweight segment-specific enhancement stream transmitted to the decoder side is extracted and encoded from the key frame of a video segment. The high-frequency spatial information contained in the enhancement stream is further transferred to the whole segment with the guide of decoded LR frames via a Transformer-based Reconstruction Network (TRN), thus enhancing the reconstruction quality at the expense of a small bit cost. Besides, we employ a Virtual Codec Network (VCN) during training for gradients back-propagation, which is able to imitate both inter-frame and intra-frame coding characteristics of standard codecs. Experimental results indicate the superiority of the proposed approach compared with recent downsampling-based enhanced standard compatible frameworks.","PeriodicalId":13080,"journal":{"name":"IEEE international Symposium on Broadband Multimedia Systems and Broadcasting","volume":"18 1","pages":"1-6"},"PeriodicalIF":0.0000,"publicationDate":"2023-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE international Symposium on Broadband Multimedia Systems and Broadcasting","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/BMSB58369.2023.10211137","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Recently, enhanced video compression frameworks compatible with traditional standard codecs have achieved competitive performance, especially in low-bandwidth scenarios; downsampling-based preprocessing networks and super-resolution-based postprocessing networks are the most commonly applied components. Surrogate networks are further employed as replacements for the non-differentiable standard codec during training. However, discarding minor information such as high-frequency spatial textures during downsampling limits the reconstruction quality. Moreover, existing surrogate networks merely imitate the intra-frame coding structure of standard codecs without leveraging inter-frame relations. In this paper, we propose a rescaling-based enhanced video compression framework. The main video stream preserves critical spatial structures and complete temporal information, while a second lightweight, segment-specific enhancement stream, extracted and encoded from the key frame of each video segment, is transmitted to the decoder side. The high-frequency spatial information contained in the enhancement stream is then transferred to the whole segment, guided by the decoded low-resolution (LR) frames, via a Transformer-based Reconstruction Network (TRN), enhancing reconstruction quality at a small bitrate cost. In addition, we employ a Virtual Codec Network (VCN) during training for gradient back-propagation; it imitates both the inter-frame and intra-frame coding characteristics of standard codecs. Experimental results demonstrate the superiority of the proposed approach over recent downsampling-based, standard-compatible enhanced frameworks.
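
To make the data flow concrete, below is a minimal PyTorch-style sketch of the training-time pipeline described above. Only the overall structure follows the abstract: a learned downsampler produces the main LR stream for every frame of a segment, a lightweight encoder extracts an enhancement stream from the key frame only, the VCN acts as a differentiable stand-in for the standard codec so gradients can propagate, and the TRN reconstructs the full-resolution segment from the decoded LR frames and the key-frame features. All module internals (single convolutions in place of the real networks), tensor shapes, and names such as RescalingPipelineSketch are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class RescalingPipelineSketch(nn.Module):
    """Toy stand-ins for the paper's networks; only the data flow is meaningful."""

    def __init__(self, channels: int = 3, feat: int = 32, scale: int = 2):
        super().__init__()
        self.scale = scale
        # Learned downsampling (preprocessing) producing the main LR stream.
        self.downsampler = nn.Conv2d(channels, channels, 3, stride=scale, padding=1)
        # Lightweight encoder for the key-frame enhancement stream.
        self.enh_encoder = nn.Conv2d(channels, feat, 3, padding=1)
        # Virtual Codec Network: differentiable proxy for the standard codec.
        # The paper's VCN models inter- and intra-frame coding jointly; this
        # placeholder is a single frame-wise convolution for illustration.
        self.vcn = nn.Conv2d(channels, channels, 3, padding=1)
        # "TRN": reconstruction guided by the enhancement features; the paper
        # uses a Transformer, a conv + pixel-shuffle upscaler stands in here.
        self.trn = nn.Sequential(
            nn.Conv2d(channels + feat, channels * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),
        )

    def forward(self, hr_segment: torch.Tensor):
        # hr_segment: (B, T, C, H, W); frame 0 is treated as the key frame.
        b, t, c, h, w = hr_segment.shape

        # 1) Main stream: downsample every frame of the segment.
        lr = self.downsampler(hr_segment.flatten(0, 1))          # (B*T, C, H/s, W/s)

        # 2) Enhancement stream: high-frequency features from the key frame only.
        enh = self.enh_encoder(hr_segment[:, 0])                 # (B, F, H, W)
        enh_lr = F.interpolate(enh, size=lr.shape[-2:])          # align with LR grid
        enh_lr = enh_lr.unsqueeze(1).expand(b, t, -1, -1, -1).flatten(0, 1)

        # 3) VCN simulates lossy coding so gradients can flow through the
        #    otherwise non-differentiable standard codec during training.
        lr_decoded = self.vcn(lr)

        # 4) Reconstruct the whole segment from decoded LR frames plus the
        #    key-frame enhancement features.
        hr_recon = self.trn(torch.cat([lr_decoded, enh_lr], dim=1))
        hr_recon = hr_recon.reshape(b, t, c, h, w)

        # Distortion loss only; a rate term would be added in practice.
        return hr_recon, F.l1_loss(hr_recon, hr_segment)


# Usage: one 5-frame segment of 3x64x64 frames.
model = RescalingPipelineSketch()
segment = torch.rand(1, 5, 3, 64, 64)
recon, loss = model(segment)
loss.backward()
```

In an actual system, the VCN would presumably be replaced by the real standard codec at inference time, and the training loss would also account for the bit cost of both streams; the sketch only shows how the two streams and the differentiable codec proxy fit together.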