A Reconfigurable Framework for Neural Network-based Video In-loop Filtering

ACM Transactions on Multimedia Computing, Communications, and Applications · Published: 2024-01-16 · DOI: 10.1145/3640467 · IF 5.2 · JCR Q1 (Computer Science, Information Systems) · CAS Tier 3 (Computer Science)
Yichi Zhang, Dandan Ding, Zhan Ma, Zhu Li
{"title":"A Reconfigurable Framework for Neural Network-based Video In-loop Filtering","authors":"Yichi Zhang, Dandan Ding, Zhan Ma, Zhu Li","doi":"10.1145/3640467","DOIUrl":null,"url":null,"abstract":"<p>This paper proposes a reconfigurable framework for neural network-based video in-loop filtering to guide large-scale models for content-aware processing. Specifically, the backbone neural model is decomposed into several convolutional groups and the encoder systematically traverses all candidate configurations combined by these groups to find the best one. The selected configuration index is then encapsulated as side information and passed to the decoder, enabling dynamic model reconfiguration during the decoding stage. The above reconfiguration process is only deployed in the inference stage on top of a pre-trained backbone model. Furthermore, we devise a Wavelet Multi-scale Poolformer (<i>WMSPFormer</i>) as the backbone network structure. <i>WMSPFormer</i> utilizes a wavelet-based multi-scale structure to losslessly decompose the input into multiple scales for spatial-spectral features aggregation. Moreover, it uses the Multi-scale Pooling operations (<i>MSPoolformer</i>) instead of complicated matrix calculations to substitute the attention process. We also extend <i>MSPoolformer</i> to a large-scale version using more parameters, referred to as <i>MSPoolformerExt</i>. Extensive experiments demonstrate that the proposed <i>WMSPFormer+Reconfig.</i> and <i>WMSPFormerExt+Reconfig.</i> achieves a remarkable 7.13% and 7.92% BD-Rate reduction over the anchor H.266/VVC, outperforming most existing methods evaluated under the same training and testing conditions. Also, the low-complexity nature of <i>WMSPFormer</i> series makes it attractive for practical applications.</p>","PeriodicalId":50937,"journal":{"name":"ACM Transactions on Multimedia Computing Communications and Applications","volume":"30 19 1","pages":""},"PeriodicalIF":5.2000,"publicationDate":"2024-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACM Transactions on Multimedia Computing Communications and Applications","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1145/3640467","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
引用次数: 0

Abstract

This paper proposes a reconfigurable framework for neural network-based video in-loop filtering that guides large-scale models toward content-aware processing. Specifically, the backbone neural model is decomposed into several convolutional groups, and the encoder systematically traverses all candidate configurations formed by combining these groups to find the best one. The selected configuration index is then encapsulated as side information and passed to the decoder, enabling dynamic model reconfiguration during decoding. This reconfiguration process is deployed only at the inference stage, on top of a pre-trained backbone model. Furthermore, we devise a Wavelet Multi-scale Poolformer (WMSPFormer) as the backbone network structure. WMSPFormer utilizes a wavelet-based multi-scale structure to losslessly decompose the input into multiple scales for spatial-spectral feature aggregation. Moreover, it replaces the attention mechanism with Multi-scale Pooling operations (MSPoolformer), avoiding complicated matrix calculations. We also extend MSPoolformer to a large-scale version with more parameters, referred to as MSPoolformerExt. Extensive experiments demonstrate that the proposed WMSPFormer+Reconfig. and WMSPFormerExt+Reconfig. achieve remarkable BD-Rate reductions of 7.13% and 7.92% over the H.266/VVC anchor, outperforming most existing methods evaluated under the same training and testing conditions. The low complexity of the WMSPFormer series also makes it attractive for practical applications.
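Since only the abstract is available here, the following is a minimal PyTorch sketch of the encoder-side configuration search it describes: a backbone split into convolutional groups, an exhaustive traversal of group combinations, and selection of a configuration index to be signaled as side information. All names (`ReconfigurableFilter`, `select_config`) and the plain-MSE selection cost (no rate term) are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the encoder-side reconfiguration search (assumptions:
# a luma-only backbone split into residual conv groups; plain MSE as the
# selection cost instead of a full rate-distortion measure).
import itertools

import torch
import torch.nn as nn


class ReconfigurableFilter(nn.Module):
    """Backbone decomposed into convolutional groups that can be toggled."""

    def __init__(self, channels: int = 64, num_groups: int = 4):
        super().__init__()
        self.head = nn.Conv2d(1, channels, 3, padding=1)
        self.groups = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, 3, padding=1),
            )
            for _ in range(num_groups)
        ])
        self.tail = nn.Conv2d(channels, 1, 3, padding=1)

    def forward(self, x: torch.Tensor, config: tuple[int, ...]) -> torch.Tensor:
        feat = self.head(x)
        for idx in config:                      # run only the selected groups
            feat = feat + self.groups[idx](feat)
        return x + self.tail(feat)              # residual in-loop filtering


@torch.no_grad()
def select_config(model: ReconfigurableFilter, recon, original):
    """Traverse all candidate configurations and return the best index."""
    n = len(model.groups)
    # Candidate set: every non-empty subset of groups, enumerated in a fixed
    # order so the decoder can rebuild the identical list from the index alone.
    candidates = [
        cfg
        for r in range(1, n + 1)
        for cfg in itertools.combinations(range(n), r)
    ]
    costs = [
        torch.mean((model(recon, cfg) - original) ** 2).item()
        for cfg in candidates
    ]
    index = min(range(len(candidates)), key=costs.__getitem__)
    return index, model(recon, candidates[index])
```

At the decoder, the same candidate list is reconstructed and the index parsed from the bitstream selects which groups to run. No retraining is involved, matching the abstract's inference-only deployment on a pre-trained backbone.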
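The abstract further states that WMSPFormer losslessly decomposes the input into multiple scales with wavelets. As a stand-in (the specific wavelet is not given in the abstract), a one-level 2D Haar transform illustrates such a lossless split into four subbands:

```python
# Stand-in for the lossless wavelet decomposition (assumption: a 2D Haar
# transform; exact for integer pixel values since all factors are powers of 2).
import torch


def haar_dwt2(x: torch.Tensor):
    """One-level 2D Haar DWT on an NCHW tensor with even H and W."""
    a = x[..., 0::2, 0::2]  # even rows, even cols
    b = x[..., 0::2, 1::2]  # even rows, odd cols
    c = x[..., 1::2, 0::2]  # odd rows, even cols
    d = x[..., 1::2, 1::2]  # odd rows, odd cols
    ll = (a + b + c + d) / 2  # low-frequency approximation
    lh = (a + b - c - d) / 2  # horizontal detail
    hl = (a - b + c - d) / 2  # vertical detail
    hh = (a - b - c + d) / 2  # diagonal detail
    return ll, lh, hl, hh


def haar_idwt2(ll, lh, hl, hh):
    """Inverse transform; recovers the input exactly, hence lossless."""
    x = torch.empty(*ll.shape[:-2], ll.shape[-2] * 2, ll.shape[-1] * 2,
                    dtype=ll.dtype, device=ll.device)
    x[..., 0::2, 0::2] = (ll + lh + hl + hh) / 2
    x[..., 0::2, 1::2] = (ll + lh - hl - hh) / 2
    x[..., 1::2, 0::2] = (ll - lh + hl - hh) / 2
    x[..., 1::2, 1::2] = (ll - lh - hl + hh) / 2
    return x
```

A quick check of the lossless property: with `x = torch.randint(0, 256, (1, 1, 8, 8)).float()`, the round trip `torch.equal(haar_idwt2(*haar_dwt2(x)), x)` holds, and the four half-resolution subbands give the network separate low- and high-frequency views of the same content.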
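Finally, the abstract describes MSPoolformer as replacing attention with multi-scale pooling. Below is a minimal token-mixer sketch in the spirit of PoolFormer; the kernel sizes and the 1x1 fusion convolution are assumptions for illustration, not the authors' exact design.

```python
# Minimal multi-scale pooling token mixer (assumptions: average pooling at
# three odd kernel sizes, fused by a 1x1 conv; not the paper's exact module).
import torch
import torch.nn as nn


class MultiScalePoolMixer(nn.Module):
    def __init__(self, channels: int, pool_sizes=(3, 5, 7)):
        super().__init__()
        self.pools = nn.ModuleList([
            nn.AvgPool2d(k, stride=1, padding=k // 2, count_include_pad=False)
            for k in pool_sizes
        ])
        self.fuse = nn.Conv2d(channels * len(pool_sizes), channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # PoolFormer-style mixing: pooled context minus the identity at each
        # scale, concatenated and fused; no query-key matrix products needed.
        mixed = torch.cat([pool(x) - x for pool in self.pools], dim=1)
        return x + self.fuse(mixed)
```

The cost is dominated by pooling and a 1x1 convolution, consistent with the low-complexity claim, whereas self-attention would require quadratic similarity computations over the spatial tokens.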
