{"title":"虚拟驾驶环境中点云的三维目标检测","authors":"Bin Xu, Yu Rong, Mingde Zhao","doi":"10.1109/ISPCE-ASIA57917.2022.9970914","DOIUrl":null,"url":null,"abstract":"In autopilot field, 3D object detection is typically done with a complimentary pair of sensors: RGB cameras and LIDARs, either alone or in tandem. Cameras provide rich information in color and texture, while LIDARs focus on geometric and relative distance information. However, the challenge of 3D object detection lies in the difficulty of effectively fusing the 2D camera images with the 3D LIDAR point cloud. In this paper, we propose a two-stage cross-modal fusion panoramic driving perception network for 3D object detection, drivable area segmentation and lane segmentation tasks in parallel and in real time, based on the Carla autopilot dataset. On the one hand, this detector uses a pre-trained semantic segmentation model to decorate the point cloud and complete the drivable area segmentation and lane line segmentation tasks, and then performs the 3D target detection task on the BEV-encoded point cloud. On the other hand, thanks to the novelty data enhancement algorithms and enhanced training strategies designed in this paper, they significantly improve the robustness of the detector. Our detector outperforms existing mainstream 3D object detectors based on pure LIDAR sensors when it comes to detecting tiny targets like pedestrians.","PeriodicalId":197173,"journal":{"name":"2022 IEEE International Symposium on Product Compliance Engineering - Asia (ISPCE-ASIA)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"3D Object Detection for Point Cloud in Virtual Driving Environment\",\"authors\":\"Bin Xu, Yu Rong, Mingde Zhao\",\"doi\":\"10.1109/ISPCE-ASIA57917.2022.9970914\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In autopilot field, 3D object detection is typically done with a complimentary pair of sensors: RGB cameras and LIDARs, either alone or in tandem. Cameras provide rich information in color and texture, while LIDARs focus on geometric and relative distance information. However, the challenge of 3D object detection lies in the difficulty of effectively fusing the 2D camera images with the 3D LIDAR point cloud. In this paper, we propose a two-stage cross-modal fusion panoramic driving perception network for 3D object detection, drivable area segmentation and lane segmentation tasks in parallel and in real time, based on the Carla autopilot dataset. On the one hand, this detector uses a pre-trained semantic segmentation model to decorate the point cloud and complete the drivable area segmentation and lane line segmentation tasks, and then performs the 3D target detection task on the BEV-encoded point cloud. On the other hand, thanks to the novelty data enhancement algorithms and enhanced training strategies designed in this paper, they significantly improve the robustness of the detector. 
Our detector outperforms existing mainstream 3D object detectors based on pure LIDAR sensors when it comes to detecting tiny targets like pedestrians.\",\"PeriodicalId\":197173,\"journal\":{\"name\":\"2022 IEEE International Symposium on Product Compliance Engineering - Asia (ISPCE-ASIA)\",\"volume\":\"6 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-11-04\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 IEEE International Symposium on Product Compliance Engineering - Asia (ISPCE-ASIA)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ISPCE-ASIA57917.2022.9970914\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE International Symposium on Product Compliance Engineering - Asia (ISPCE-ASIA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISPCE-ASIA57917.2022.9970914","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
3D Object Detection for Point Cloud in Virtual Driving Environment
In the field of autonomous driving, 3D object detection typically relies on a complementary pair of sensors, RGB cameras and LiDARs, used either alone or in tandem. Cameras provide rich color and texture information, while LiDARs capture geometry and relative distance. The central challenge of 3D object detection, however, lies in effectively fusing 2D camera images with the 3D LiDAR point cloud. In this paper, we propose a two-stage cross-modal fusion panoramic driving perception network that performs 3D object detection, drivable-area segmentation, and lane segmentation in parallel and in real time, based on the CARLA autonomous-driving dataset. On the one hand, the detector uses a pre-trained semantic segmentation model to decorate the point cloud and complete the drivable-area and lane-line segmentation tasks, and then performs 3D object detection on the BEV-encoded point cloud. On the other hand, the novel data augmentation algorithms and enhanced training strategies designed in this paper significantly improve the robustness of the detector. Our detector outperforms existing mainstream LiDAR-only 3D object detectors at detecting small targets such as pedestrians.
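The abstract gives no implementation details, but the point-cloud "decoration" step it describes is commonly implemented by projecting each LiDAR point into the camera image and appending the per-pixel class scores of a pre-trained segmentation model to that point's features. The sketch below illustrates that projection-and-append step; all function names, array shapes, and calibration conventions (a 4x4 LiDAR-to-camera extrinsic and a 3x3 intrinsic matrix K) are assumptions for illustration, not the authors' code.

```python
# Hypothetical sketch of point-cloud "decoration": append per-pixel semantic
# scores from a pre-trained segmentation model to each projected LiDAR point.
# Shapes and calibration conventions are assumptions, not the paper's code.
import numpy as np

def decorate_point_cloud(points, seg_scores, cam_intrinsics, lidar_to_cam):
    """points:         (N, 4) array of [x, y, z, intensity] in the LiDAR frame.
    seg_scores:     (H, W, C) per-pixel class scores from a segmentation model.
    cam_intrinsics: (3, 3) camera matrix K.
    lidar_to_cam:   (4, 4) homogeneous LiDAR-to-camera extrinsic transform.
    Returns (M, 4 + C) decorated points that project inside the image."""
    H, W, C = seg_scores.shape

    # Transform points into the camera frame using homogeneous coordinates.
    xyz1 = np.concatenate([points[:, :3], np.ones((len(points), 1))], axis=1)
    cam_pts = (lidar_to_cam @ xyz1.T).T[:, :3]

    # Keep only points in front of the camera, then project with K.
    in_front = cam_pts[:, 2] > 0
    uvw = (cam_intrinsics @ cam_pts[in_front].T).T
    uv = (uvw[:, :2] / uvw[:, 2:3]).round().astype(int)

    # Keep projections that land inside the image bounds.
    valid = (uv[:, 0] >= 0) & (uv[:, 0] < W) & (uv[:, 1] >= 0) & (uv[:, 1] < H)
    kept = points[in_front][valid]
    scores = seg_scores[uv[valid, 1], uv[valid, 0]]  # (M, C) per-point scores

    # Decorated point = original LiDAR features + semantic score vector.
    return np.concatenate([kept, scores], axis=1)
```

In a pipeline like the one the abstract outlines, such decorated points would then be encoded into a BEV representation (e.g., a pillar or voxel grid) before the 3D detection head; the added per-point semantic channels are what would let a LiDAR-based detector benefit from camera cues on small targets such as pedestrians.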