NVRC: Neural Video Representation Compression

Ho Man Kwan, Ge Gao, Fan Zhang, Andrew Gower, David Bull
arXiv - EE - Image and Video Processing, 2024-09-11. DOI: arxiv-2409.07414.

Abstract

Recent advances in implicit neural representation (INR)-based video coding have demonstrated its potential to compete with both conventional and other learning-based approaches. With INR methods, a neural network is trained to overfit a video sequence, and its parameters are then compressed to obtain a compact representation of the video content. However, although promising results have been achieved, the best INR-based methods are still outperformed by the latest standard codecs, such as VVC VTM, partially due to the simple model compression techniques employed. In this paper, rather than focusing on representation architectures as in many existing works, we propose a novel INR-based video compression framework, Neural Video Representation Compression (NVRC), targeting compression of the representation. Based on the novel entropy coding and quantization models proposed, NVRC is, for the first time, able to optimize an INR-based video codec in a fully end-to-end manner. To further minimize the additional bitrate overhead introduced by the entropy models, we also propose a new model compression framework that codes all the network, quantization, and entropy model parameters hierarchically. Our experiments show that NVRC outperforms many conventional and learning-based benchmark codecs, achieving a 24% average coding gain over VVC VTM (Random Access) on the UVG dataset, measured in PSNR. As far as we are aware, this is the first time an INR-based video codec has achieved such performance. The implementation of NVRC will be released at www.github.com.
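The abstract centres on quantizing and entropy-coding the parameters of an overfitted representation. As a rough illustration of that idea only (not NVRC's actual learned models), the sketch below uniformly quantizes a parameter vector and estimates its bitrate under a hypothetical factorized Gaussian entropy model; the step size, mean, and scale here are illustrative assumptions:

```python
import numpy as np
from math import erf, sqrt

def quantize(w, step):
    # Uniform scalar quantization (hard rounding; end-to-end training
    # would relax this, e.g. with additive uniform noise, to keep the
    # objective differentiable).
    return np.round(w / step) * step

def rate_bits(q, step, mu, sigma):
    # Factorized Gaussian entropy model: the probability mass of each
    # quantized value is the CDF difference over one quantization bin,
    # and the estimated rate is the negative log2-likelihood.
    cdf = lambda x: 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))
    p = np.array([max(cdf(v + step / 2) - cdf(v - step / 2), 1e-12) for v in q])
    return float(-np.log2(p).sum())

rng = np.random.default_rng(0)
w = rng.normal(0.0, 1.0, 1000)   # stand-in for INR parameters
q = quantize(w, step=0.1)
bits = rate_bits(q, 0.1, mu=0.0, sigma=1.0)
```

In an NVRC-style end-to-end setup, such a rate estimate would be added to the reconstruction loss and minimized jointly with the network parameters; a finer quantization step lowers distortion but raises the estimated bitrate, which is the trade-off the framework optimizes.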