3D Object Detection for Point Cloud in Virtual Driving Environment

Bin Xu, Yu Rong, Mingde Zhao
DOI: 10.1109/ISPCE-ASIA57917.2022.9970914
Published in: 2022 IEEE International Symposium on Product Compliance Engineering - Asia (ISPCE-ASIA)
Publication date: 2022-11-04
Citations: 0

Abstract

In the field of autonomous driving, 3D object detection is typically performed with a complementary pair of sensors, RGB cameras and LiDARs, used either alone or in tandem. Cameras provide rich color and texture information, while LiDARs capture geometry and relative distance. The challenge of 3D object detection, however, lies in effectively fusing the 2D camera images with the 3D LiDAR point cloud. In this paper, we propose a two-stage cross-modal fusion panoramic driving perception network that performs 3D object detection, drivable area segmentation, and lane segmentation in parallel and in real time, based on the Carla autopilot dataset. On the one hand, the detector uses a pre-trained semantic segmentation model to decorate the point cloud and to complete the drivable area segmentation and lane line segmentation tasks, and then performs 3D object detection on the BEV-encoded point cloud. On the other hand, the novel data augmentation algorithms and enhanced training strategies designed in this paper significantly improve the robustness of the detector. Our detector outperforms existing mainstream 3D object detectors based on pure LiDAR sensors when detecting small targets such as pedestrians.
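The abstract does not give implementation details, but the "decoration" step it describes — attaching the output of a pre-trained image segmentation model to each LiDAR point before BEV encoding — is commonly done by projecting points into the image and sampling per-pixel class scores. The following is a minimal sketch of that idea; the function name, argument layout, and the assumption of a single calibrated camera are illustrative, not taken from the paper.

```python
import numpy as np

def decorate_point_cloud(points, seg_scores, cam_intrinsics, lidar_to_cam):
    """Append per-pixel semantic class scores to each LiDAR point.

    points         : (N, 3) xyz coordinates in the LiDAR frame
    seg_scores     : (H, W, C) per-pixel softmax output of a segmentation model
    cam_intrinsics : (3, 3) camera matrix K
    lidar_to_cam   : (4, 4) homogeneous LiDAR-to-camera transform
    Returns an (M, 3 + C) array of decorated points that project inside the image.
    """
    H, W, C = seg_scores.shape
    # Transform points into the camera frame.
    homo = np.hstack([points, np.ones((points.shape[0], 1))])   # (N, 4)
    cam_pts = (lidar_to_cam @ homo.T).T[:, :3]                  # (N, 3)
    in_front = cam_pts[:, 2] > 1e-6                             # keep points ahead of the camera
    cam_pts = cam_pts[in_front]
    kept = points[in_front]
    # Perspective projection to pixel coordinates.
    uv = (cam_intrinsics @ cam_pts.T).T
    uv = uv[:, :2] / uv[:, 2:3]
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    valid = (u >= 0) & (u < W) & (v >= 0) & (v < H)
    # Concatenate xyz with the sampled class scores.
    return np.hstack([kept[valid], seg_scores[v[valid], u[valid]]])
```

The decorated points can then be rasterized into a BEV grid for the detection head; points outside the camera frustum are dropped here for simplicity, though a real pipeline might instead keep them with zero scores.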