A Deep Transformer-Based Fast CU Partition Approach for Inter-Mode VVC

Tianyi Li;Mai Xu;Zheng Liu;Ying Chen;Kai Li
{"title":"A Deep Transformer-Based Fast CU Partition Approach for Inter-Mode VVC","authors":"Tianyi Li;Mai Xu;Zheng Liu;Ying Chen;Kai Li","doi":"10.1109/TIP.2025.3533204","DOIUrl":null,"url":null,"abstract":"The latest versatile video coding (VVC) standard proposed by the Joint Video Exploration Team (JVET) has significantly improved coding efficiency compared to that of its predecessor, while introducing an extremely higher computational complexity by <inline-formula> <tex-math>$6\\sim 26$ </tex-math></inline-formula> times. The quad-tree plus multi-type tree (QTMT)-based coding unit (CU) partition accounts for most of the encoding time in VVC encoding. This paper proposes a data-driven fast CU partition approach based on an efficient Transformer model to accelerate VVC inter-coding. First, we establish a large-scale database for inter-mode VVC, comprising diverse CU partition patterns from more than 800 raw video sequences across various resolutions and contents. Next, we propose a deep neural network model with a Transformer-based temporal topology for predicting the CU partition, named as TCP-Net, which is adaptive to the group of pictures (GOP) hierarchy in VVC. Then, we design a two-stage structured output for TCP-Net, reflecting both the locations of CU edges and the split modes of all possible CUs. Accordingly, we develop a dual-supervised optimization mechanism to train the TCP-Net model with improved accuracy. The experimental results have verified that our approach can reduce the encoding time by <inline-formula> <tex-math>$46.89\\sim 55.91$ </tex-math></inline-formula>% with negligible rate-distortion (RD) degradation, outperforming other state-of-the-art approaches.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"34 ","pages":"1133-1148"},"PeriodicalIF":13.7000,"publicationDate":"2025-01-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10857954/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

The latest versatile video coding (VVC) standard, developed by the Joint Video Experts Team (JVET), significantly improves coding efficiency over its predecessor, but at the cost of a $6\sim 26$ times higher computational complexity. The quad-tree plus multi-type tree (QTMT)-based coding unit (CU) partition accounts for most of the encoding time in VVC. This paper proposes a data-driven fast CU partition approach based on an efficient Transformer model to accelerate VVC inter-coding. First, we establish a large-scale database for inter-mode VVC, comprising diverse CU partition patterns from more than 800 raw video sequences across various resolutions and contents. Next, we propose a deep neural network model with a Transformer-based temporal topology for predicting the CU partition, named TCP-Net, which is adaptive to the group of pictures (GOP) hierarchy in VVC. Then, we design a two-stage structured output for TCP-Net, reflecting both the locations of CU edges and the split modes of all possible CUs. Accordingly, we develop a dual-supervised optimization mechanism to train the TCP-Net model with improved accuracy. Experimental results verify that our approach reduces the encoding time by $46.89\sim 55.91\%$ with negligible rate-distortion (RD) degradation, outperforming other state-of-the-art approaches.
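The abstract describes the model only at a high level. As a rough illustration of the two-stage structured output and the dual-supervised loss it mentions, the PyTorch sketch below pairs a CU-edge head with a split-mode head over a shared backbone and sums the two supervision terms. This is not the authors' TCP-Net: the Transformer-based temporal topology over the GOP is omitted, and the module names, tensor shapes, the six-way split-mode label set, the candidate-CU count of 85, and the loss weight `lam` are all assumptions made for illustration.

```python
# Minimal sketch of a dual-supervised two-head design, assuming a 128x128 CTU
# represented on a 32x32 grid (4x4 granularity). NOT the authors' TCP-Net.
import torch
import torch.nn as nn
import torch.nn.functional as F

SPLIT_MODES = 6  # assumed label set: {non-split, QT, BT-H, BT-V, TT-H, TT-V}

class DualHeadSketch(nn.Module):
    """Shared backbone with two structured outputs:
    (1) a CU-edge probability map over the boundary grid of the CTU,
    (2) per-block split-mode logits for candidate CUs (count assumed)."""
    def __init__(self, in_ch=3, feat=64, blocks=85):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(in_ch, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(),
        )
        self.edge_head = nn.Conv2d(feat, 1, 1)        # stage 1: CU-edge map
        self.mode_head = nn.Sequential(               # stage 2: split modes
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(feat, blocks * SPLIT_MODES),
        )
        self.blocks = blocks

    def forward(self, x):
        f = self.backbone(x)
        edge_logits = self.edge_head(f)                        # (B, 1, H, W)
        mode_logits = self.mode_head(f).view(-1, self.blocks, SPLIT_MODES)
        return edge_logits, mode_logits

def dual_supervised_loss(edge_logits, mode_logits, edge_gt, mode_gt, lam=1.0):
    # Two supervision signals optimized jointly (the paper's "dual-supervised
    # optimization mechanism"); lam is an assumed balancing weight.
    l_edge = F.binary_cross_entropy_with_logits(edge_logits, edge_gt)
    l_mode = F.cross_entropy(mode_logits.flatten(0, 1), mode_gt.flatten())
    return l_edge + lam * l_mode

if __name__ == "__main__":
    net = DualHeadSketch()
    x = torch.randn(2, 3, 32, 32)                  # assumed CTU feature input
    edge_gt = torch.randint(0, 2, (2, 1, 32, 32)).float()
    mode_gt = torch.randint(0, SPLIT_MODES, (2, 85))
    loss = dual_supervised_loss(*net(x), edge_gt, mode_gt)
    loss.backward()
    print(loss.item())
```

Jointly optimizing both heads is the essence of the dual-supervision idea: the edge map constrains where partitions occur, while the mode head decides how each candidate CU splits. The relative weighting between the two terms would be a tuning knob in any real implementation.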