{"title":"Weakly supervised point cloud semantic segmentation based on scene consistency","authors":"Yingchun Niu, Jianqin Yin, Chao Qi, Liang Geng","doi":"10.1007/s10489-024-05822-2","DOIUrl":null,"url":null,"abstract":"<div><p>Weakly supervised point cloud segmentation has garnered considerable interest recently, primarily due to its ability to diminish labor-intensive manual labeling costs. The effectiveness of such methods hinges on their ability to augment the supervision signals available for training implicitly. However, we found that most approaches tend to be implemented through complex modeling, which is not conducive to deployment and implementation in resource-poor scenarios. Our study introduces a novel scene consistency modeling approach that significantly enhances weakly supervised point cloud segmentation in this context. By synergistically modeling both complete and incomplete scenes, our method can improve the quality of the supervision signal and save more resources and ease of deployment in practical applications. To achieve this, we first generate the corresponding incomplete scene for the whole scene using windowing techniques. Next, we input the complete and incomplete scenes into a network encoder and obtain prediction results for each scene through two decoders. We enforce semantic consistency between the labeled and unlabeled data in the two scenes by employing cross-entropy and KL loss. This consistent modeling method enables the network to focus more on the same areas in both scenes, capturing local details and effectively increasing the supervision signals. One of the advantages of the proposed method is its simplicity and cost-effectiveness. Because we rely solely on variance and KL loss to model scene consistency, resulting in straightforward computations. Our experimental evaluations on S3DIS, ScanNet, and Semantic3D datasets provide further evidence that our method can effectively leverage sparsely labeled data and abundant unlabeled data to enhance supervision signals and improve the overall model performance.</p></div>","PeriodicalId":8041,"journal":{"name":"Applied Intelligence","volume":"54 23","pages":"12439 - 12452"},"PeriodicalIF":3.4000,"publicationDate":"2024-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Applied Intelligence","FirstCategoryId":"94","ListUrlMain":"https://link.springer.com/article/10.1007/s10489-024-05822-2","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Abstract
Weakly supervised point cloud segmentation has garnered considerable interest recently, primarily because it reduces labor-intensive manual labeling costs. The effectiveness of such methods hinges on their ability to implicitly augment the supervision signals available for training. However, we found that most approaches rely on complex modeling, which hinders deployment in resource-poor scenarios. Our study introduces a novel scene-consistency modeling approach that significantly enhances weakly supervised point cloud segmentation in this setting. By synergistically modeling both complete and incomplete scenes, our method improves the quality of the supervision signal while consuming fewer resources and remaining easy to deploy in practical applications. To achieve this, we first generate an incomplete counterpart of each complete scene using windowing techniques. Next, we feed the complete and incomplete scenes into a shared network encoder and obtain predictions for each scene through two decoders. We enforce semantic consistency between the labeled and unlabeled data in the two scenes using cross-entropy and KL losses. This consistency modeling lets the network focus on the same regions in both scenes, capturing local details and effectively increasing the supervision signals. A key advantage of the proposed method is its simplicity and cost-effectiveness: because we rely solely on cross-entropy and KL losses to model scene consistency, the computations are straightforward. Experimental evaluations on the S3DIS, ScanNet, and Semantic3D datasets provide further evidence that our method effectively leverages sparsely labeled data and abundant unlabeled data to enhance supervision signals and improve overall model performance.
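The abstract outlines a concrete training pipeline: window a complete scene into an incomplete one, run both through a shared encoder with separate decoders, then combine a cross-entropy loss on the sparse labels with a KL consistency loss on the overlapping points. The sketch below illustrates one such training step in PyTorch. It is a minimal illustration of the idea, not the authors' released code: the names (`window_crop`, `consistency_step`, the toy linear encoder/decoders, `keep_ratio`, the unit loss weighting) are all assumptions, since the paper's exact interfaces and hyperparameters are not given in the abstract.

```python
# Sketch of a scene-consistency training step, assuming per-point modules.
# All function and parameter names here are hypothetical illustrations.
import torch
import torch.nn.functional as F

def window_crop(points, keep_ratio=0.7):
    """Hypothetical windowing: keep points whose x-coordinate falls inside a
    randomly placed window, producing an 'incomplete' view of the scene."""
    x = points[:, 0]
    span = x.max() - x.min()
    lo = x.min() + torch.rand(1, device=points.device) * (1.0 - keep_ratio) * span
    hi = lo + keep_ratio * span
    mask = (x >= lo) & (x <= hi)          # boolean mask of retained points
    return points[mask], mask

def consistency_step(encoder, dec_full, dec_part, points, labels, label_mask):
    """One step: shared encoder, two decoders, cross-entropy on the sparse
    labels, KL divergence aligning predictions on the shared (windowed) points."""
    part_points, keep = window_crop(points)

    logits_full = dec_full(encoder(points))       # (N, C) complete scene
    logits_part = dec_part(encoder(part_points))  # (M, C), M <= N

    # Supervised term: cross-entropy on the few labeled points.
    ce = F.cross_entropy(logits_full[label_mask], labels[label_mask])

    # Consistency term: KL between the incomplete-scene predictions and the
    # (detached) complete-scene predictions at the same points.
    kl = F.kl_div(F.log_softmax(logits_part, dim=-1),
                  F.softmax(logits_full[keep], dim=-1).detach(),
                  reduction="batchmean")
    return ce + kl
```

A toy invocation, standing in linear layers for real point cloud networks (13 classes as in S3DIS, roughly 1% of points labeled to mimic the weak setting):

```python
enc = torch.nn.Linear(3, 16)           # stand-in per-point encoder
dec_a, dec_b = torch.nn.Linear(16, 13), torch.nn.Linear(16, 13)
pts = torch.randn(1000, 3)
lbl = torch.randint(0, 13, (1000,))
msk = torch.rand(1000) < 0.01          # sparse label mask
loss = consistency_step(enc, dec_a, dec_b, pts, lbl, msk)
```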
Journal overview:
With a focus on research in artificial intelligence and neural networks, this journal addresses real-life manufacturing, defense, management, government, and industrial problems that are too complex to be solved through conventional approaches and instead require the simulation of intelligent thought processes, heuristics, the application of knowledge, and distributed and parallel processing. The integration of these multiple approaches in solving complex problems is of particular importance.
The journal presents new and original research and technological developments that address real, complex problems. It provides a medium for exchanging scientific research and technological achievements accomplished by the international community.