Title: Dataset Creation for Semantic Segmentation Using Colored Point Clouds Considering Shadows on Traversable Area
Authors: Marin Wada, Yuriko Ueda, Junya Morioka, Miho Adachi, Ryusuke Miyamoto
Journal: Journal of Robotics and Mechatronics, Vol. 12, No. 3 (JCR Q4, Robotics; Impact Factor 0.9)
DOI: 10.20965/jrm.2023.p1406 (https://doi.org/10.20965/jrm.2023.p1406)
Published: 2023-12-20 (Journal Article)
Citations: 0
Abstract
Semantic segmentation, which provides pixel-wise class labels for an input image, is expected to significantly improve the movement performance of autonomous robots. However, it is difficult to train a good classifier for target applications, because public large-scale datasets are often unsuitable; in practice, a classifier trained on Cityscapes is not sufficiently accurate for the Tsukuba Challenge. To generate an appropriate dataset for the target environment, we attempt to construct a semi-automatic method using a colored point cloud obtained with a 3D scanner. Although some degree of accuracy is achieved, it is not practical. Hence, we propose a novel method that creates images with shadows by rendering them in 3D space, improving the classification accuracy on actual shadowed images, for which existing methods do not produce appropriate results. Experimental results using datasets captured around the Tsukuba City Hall demonstrate that the proposed method was superior when appropriate constraints were applied for shadow generation; the mIoU improved from 0.358 to 0.491 when testing images were obtained at different locations.
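The mIoU figures reported above (0.358 vs. 0.491) are the mean of per-class intersection-over-union between predicted and ground-truth label maps. A minimal NumPy sketch of this standard metric follows; the function name and the toy label maps are illustrative, not from the paper:

```python
import numpy as np

def mean_iou(pred, gt, num_classes):
    """Mean intersection-over-union across classes present in pred or gt."""
    ious = []
    for c in range(num_classes):
        p = pred == c
        g = gt == c
        union = np.logical_or(p, g).sum()
        if union == 0:
            continue  # class absent from both maps; skip it
        inter = np.logical_and(p, g).sum()
        ious.append(inter / union)
    return float(np.mean(ious)) if ious else 0.0

# Toy 2x2 label maps with 3 classes.
pred = np.array([[0, 1], [1, 2]])
gt   = np.array([[0, 1], [2, 2]])
print(mean_iou(pred, gt, 3))  # class IoUs: 1.0, 0.5, 0.5 -> mean 0.666...
```

Classes absent from both prediction and ground truth are skipped rather than counted as zero, a common convention when evaluating on scenes that do not contain every dataset class.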
About the journal
First published in 1989, the Journal of Robotics and Mechatronics (JRM) has the longest publication history in the world in this field, having published over 2,000 works exclusively on robotics and mechatronics since its first issue. The Journal publishes academic papers, development reports, reviews, letters, notes, and discussions. The JRM is a peer-reviewed journal in fields such as robotics, mechatronics, automation, and system integration. Its editorial board includes well-established researchers and engineers in the field from around the world. The scope of the journal includes any and all topics on robotics and mechatronics. As key technologies in robotics and mechatronics, topics include actuator design, motion control, sensor design, sensor fusion, sensor networks, robot vision, audition, mechanism design, robot kinematics and dynamics, mobile robots, path planning, navigation, SLAM, robot hands, manipulators, nano/micro robots, humanoids, service and home robots, universal design, middleware, human-robot interaction, human interfaces, networked robotics, telerobotics, ubiquitous robots, learning, and intelligence. The scope also includes applications of robotics and automation, and system integration in the fields of manufacturing, construction, underwater, space, agriculture, sustainability, energy conservation, ecology, rescue, hazardous environments, safety and security, dependability, medical care, and welfare.