Occupancy-Assisted Attribute Artifact Reduction for Video-Based Point Cloud Compression

IEEE Transactions on Broadcasting | IF 3.2 | CAS Q1 (Computer Science) | JCR Q2 (Engineering, Electrical & Electronic) | Pub Date: 2024-01-30 | DOI: 10.1109/TBC.2024.3353568
Linyao Gao;Zhu Li;Lizhi Hou;Yiling Xu;Jun Sun
{"title":"Occupancy-Assisted Attribute Artifact Reduction for Video-Based Point Cloud Compression","authors":"Linyao Gao;Zhu Li;Lizhi Hou;Yiling Xu;Jun Sun","doi":"10.1109/TBC.2024.3353568","DOIUrl":null,"url":null,"abstract":"Video-based point cloud compression (V-PCC) has achieved remarkable compression efficiency, which converts point clouds into videos and leverages video codecs for coding. For lossy compression, the undesirable artifacts of attribute images always degrade the point clouds attribute reconstruction quality. In this paper, we propose an Occupancy-assisted Compression Artifact Removal Network (OCARNet) to remove the distortions of V-PCC decoded attribute images for high-quality point cloud attribute reconstruction. Specifically, the occupancy information is fed into network as a prior knowledge to provide more spatial and structural information and to assist in eliminating the distortions of the texture regions. To aggregate the occupancy information effectively, we design a multi-level feature fusion framework with Channel-Spatial Attention based Residual Blocks (CSARB), where the short and long residual connections are jointly employed to capture the local context and long-range dependency. Besides, we propose a Masked Mean Square Error (MMSE) loss function based on the occupancy information to train our proposed network to focus on estimating the attribute artifacts of the occupied regions. To the best of our knowledge, this is the first learning-based attribute artifact removal method for V-PCC. Experimental results demonstrate that our framework outperforms existing state-of-the-art methods and shows the effectiveness on both objective and subjective quality comparisons.","PeriodicalId":13159,"journal":{"name":"IEEE Transactions on Broadcasting","volume":"70 2","pages":"667-680"},"PeriodicalIF":3.2000,"publicationDate":"2024-01-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Broadcasting","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10416804/","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0

Abstract

Video-based point cloud compression (V-PCC) achieves remarkable compression efficiency by converting point clouds into videos and leveraging video codecs for coding. Under lossy compression, the undesirable artifacts in the attribute images degrade the quality of the reconstructed point cloud attributes. In this paper, we propose an Occupancy-assisted Compression Artifact Removal Network (OCARNet) to remove the distortions of V-PCC decoded attribute images for high-quality point cloud attribute reconstruction. Specifically, the occupancy information is fed into the network as prior knowledge to provide additional spatial and structural information and to assist in eliminating the distortions in the texture regions. To aggregate the occupancy information effectively, we design a multi-level feature fusion framework with Channel-Spatial Attention based Residual Blocks (CSARB), where short and long residual connections are jointly employed to capture local context and long-range dependencies. In addition, we propose a Masked Mean Square Error (MMSE) loss function based on the occupancy information, which trains the proposed network to focus on estimating the attribute artifacts in the occupied regions. To the best of our knowledge, this is the first learning-based attribute artifact removal method for V-PCC. Experimental results demonstrate that our framework outperforms existing state-of-the-art methods in both objective and subjective quality comparisons.
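
The abstract names the Channel-Spatial Attention based Residual Block (CSARB) but does not describe its internal design. As a rough illustration of the general idea only, the following PyTorch sketch shows a generic residual block that applies channel attention followed by spatial attention (in the spirit of CBAM-style attention); the layer widths, kernel sizes, and attention layout are illustrative assumptions, not the authors' architecture. In practice the occupancy map would presumably be supplied alongside the decoded attribute image (e.g., concatenated along the channel dimension), which is also an assumption here.

```python
import torch
import torch.nn as nn


class ChannelSpatialAttentionResBlock(nn.Module):
    """Illustrative residual block with channel and spatial attention.

    A generic CBAM-style block sketched from the abstract's description
    of CSARB; it is NOT the paper's exact architecture.
    """

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        # Channel attention: squeeze spatial dims, then re-weight channels.
        self.channel_att = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        # Spatial attention: re-weight each pixel from pooled channel stats.
        self.spatial_att = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feat = self.body(x)
        feat = feat * self.channel_att(feat)
        pooled = torch.cat(
            [feat.mean(dim=1, keepdim=True), feat.amax(dim=1, keepdim=True)],
            dim=1,
        )
        feat = feat * self.spatial_att(pooled)
        return x + feat  # short residual connection around the block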
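
The Masked Mean Square Error (MMSE) loss is described only at a high level: reconstruction error is penalized in the occupied regions indicated by the occupancy map. A minimal sketch, assuming a per-pixel binary occupancy map and averaging over occupied entries (the paper's exact normalization is not given in the abstract):

```python
import torch


def masked_mse_loss(pred: torch.Tensor,
                    target: torch.Tensor,
                    occupancy: torch.Tensor,
                    eps: float = 1e-8) -> torch.Tensor:
    """Occupancy-masked MSE: penalize errors only on occupied pixels.

    pred, target: (N, C, H, W) restored / ground-truth attribute images.
    occupancy:    (N, 1, H, W) binary occupancy map (1 = occupied).
    Averaging over occupied pixel-channel entries is an assumption.
    """
    mask = occupancy.float()
    sq_err = (pred - target) ** 2 * mask  # zero out unoccupied regions
    return sq_err.sum() / (mask.sum() * pred.shape[1] + eps)
```

Restricting the loss to occupied pixels keeps the network from spending capacity on unoccupied padding regions, which are discarded anyway when the attribute image is mapped back to the 3D point cloud.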
Source Journal

IEEE Transactions on Broadcasting (Engineering & Technology - Telecommunications)
CiteScore: 9.40
Self-citation rate: 31.10%
Publication volume: 79
Review time: 6-12 weeks

Journal description: The Society's Field of Interest is "Devices, equipment, techniques and systems related to broadcast technology, including the production, distribution, transmission, and propagation aspects." In addition to this formal FOI statement, which is used to provide guidance to the Publications Committee in the selection of content, the AdCom has further resolved that "broadcast systems includes all aspects of transmission, propagation, and reception."