PCC arena: a benchmark platform for point cloud compression algorithms
Cheng-Hao Wu, Chih-Fan Hsu, Ting-Chun Kuo, C. Griwodz, M. Riegler, Géraldine Morin, Cheng-Hsin Hsu
Proceedings of the 12th ACM International Workshop on Immersive Mixed and Virtual Environment Systems, 2020-06-08
DOI: 10.1145/3386293.3397112
Citations: 6
Abstract
Point Cloud Compression (PCC) algorithms can be roughly categorized into (i) traditional Signal-Processing (SP) based algorithms and, more recently, (ii) Machine-Learning (ML) based algorithms. PCC algorithms are often evaluated with very different datasets, metrics, and parameters, which makes the evaluation results hard to interpret. In this paper, we propose an open-source benchmark, called PCC Arena, which consists of several point cloud datasets, a suite of performance metrics, and a unified evaluation procedure. To demonstrate its practicality, we employ PCC Arena to evaluate three SP-based PCC algorithms and one ML-based PCC algorithm. We also conduct a user study to quantify the user experience with rendered objects reconstructed by different PCC algorithms. Our evaluations reveal several interesting insights. For example, SP-based PCC algorithms have diverse design objectives and strike different trade-offs between coding efficiency and time complexity. Furthermore, although ML-based PCC algorithms are quite promising, they may suffer from long running times, poor scalability to diverse point cloud densities, and high engineering complexity. Nonetheless, ML-based PCC algorithms merit more in-depth study, and PCC Arena will play a critical role in follow-up research by enabling more interpretable and comparable evaluation results.
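The abstract does not enumerate PCC Arena's metric suite, but fidelity metrics in PCC evaluation are typically distance-based. As an illustration only (not the paper's code), the sketch below computes the symmetric point-to-point (D1) geometry PSNR, a metric widely used to compare an original point cloud against its reconstruction; the function name d1_psnr, the choice of the bounding-box diagonal as the peak value, and the numpy/scipy dependencies are assumptions made for this sketch.

```python
# Illustrative sketch of a point-to-point (D1) geometry PSNR, a common
# fidelity metric in PCC evaluation. This is NOT PCC Arena's implementation.
import numpy as np
from scipy.spatial import cKDTree


def d1_psnr(original: np.ndarray, reconstructed: np.ndarray) -> float:
    """Symmetric point-to-point geometry PSNR between two (N, 3) point clouds."""

    def directional_mse(src: np.ndarray, dst: np.ndarray) -> float:
        # For each source point, squared distance to its nearest neighbor
        # in the destination cloud.
        dists, _ = cKDTree(dst).query(src)
        return float(np.mean(dists ** 2))

    # Symmetric error: the worse of the two directions dominates,
    # following common PCC evaluation practice.
    err = max(
        directional_mse(original, reconstructed),
        directional_mse(reconstructed, original),
    )
    # Peak signal: the bounding-box diagonal of the original cloud is one
    # conventional choice (an assumption here, not taken from the paper).
    peak = float(np.linalg.norm(original.max(axis=0) - original.min(axis=0)))
    return 10.0 * np.log10(peak ** 2 / err) if err > 0 else float("inf")
```

In a benchmark loop, such a score would typically be paired with the encoded bitstream size and the encoder/decoder wall-clock times, exposing the coding-efficiency versus time-complexity trade-offs the abstract discusses.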