Sensor fusion of camera and LiDAR raw data for vehicle detection

Gokulesh Danapal, Giovanni A. Santos, J. P. J. D. Da Costa, B. Praciano, Gabriel P. M. Pinheiro
{"title":"Sensor fusion of camera and LiDAR raw data for vehicle detection","authors":"Gokulesh Danapal, Giovanni A. Santos, J. P. J. D. Da Costa, B. Praciano, Gabriel P. M. Pinheiro","doi":"10.1109/WCNPS50723.2020.9263724","DOIUrl":null,"url":null,"abstract":"Autonomous vehicles are expected to save almost half-million lives between 2035 to 2045. Moreover, since 90% of the accidents are caused by humans, 9% by weather and road conditions, and only 1% by vehicular failures, autonomous vehicles will provide much safer traffic, drastically decreasing the number of accidents. To perceive the surrounding objects and environment, autonomous vehicles depend on their sensor systems such as cameras, LiDARs, radars, and sonars. Traditionally, decision fusion is performed, implying into first individually processing each sensor’s data and then combining the processed information of the different sensors. In contrast to the traditional decision fusion of the processed information from each sensor, the raw data fusion extracts information from all sensors’ raw data providing higher reliability and accuracy in terms of object and environment perception. This paper proposes an improved sensor fusion framework based on You Only Look Once (YOLO) that jointly processes the raw data from cameras and LiDARs. To validate our framework, the dataset of the Karlsruhe Institute of Technology (KITTI) in partnership with Toyota Technical University generated using two cameras, and a Velodyne laser scanner is considered. The proposed raw data fusion framework outperforms the traditional decision fusion framework with a gain of 5% in terms of vehicle detection performance.","PeriodicalId":385668,"journal":{"name":"2020 Workshop on Communication Networks and Power Systems (WCNPS)","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2020-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 Workshop on Communication Networks and Power Systems (WCNPS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/WCNPS50723.2020.9263724","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

Autonomous vehicles are expected to save almost half a million lives between 2035 and 2045. Moreover, since 90% of accidents are caused by humans, 9% by weather and road conditions, and only 1% by vehicular failures, autonomous vehicles will make traffic much safer, drastically decreasing the number of accidents. To perceive surrounding objects and the environment, autonomous vehicles depend on sensor systems such as cameras, LiDARs, radars, and sonars. Traditionally, decision fusion is performed: each sensor's data is first processed individually, and the processed information from the different sensors is then combined. In contrast to this traditional decision fusion of processed information, raw data fusion extracts information from all sensors' raw data, providing higher reliability and accuracy in object and environment perception. This paper proposes an improved sensor fusion framework based on You Only Look Once (YOLO) that jointly processes the raw data from cameras and LiDARs. To validate the framework, the KITTI dataset, generated by the Karlsruhe Institute of Technology in partnership with the Toyota Technological Institute using two cameras and a Velodyne laser scanner, is considered. The proposed raw data fusion framework outperforms the traditional decision fusion framework with a gain of 5% in vehicle detection performance.
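The abstract does not detail how the raw camera and LiDAR data are combined before detection. As a purely illustrative sketch (not the authors' implementation), the code below shows one common way such raw data fusion is arranged for a YOLO-style detector: Velodyne points are projected into the camera image plane using KITTI-style calibration matrices (P2, R0_rect, Tr_velo_to_cam) and stacked as a sparse depth channel on top of the RGB image. All matrix values, array shapes, and function names here are placeholder assumptions rather than values from the paper or the KITTI files.

```python
# Illustrative sketch of camera-LiDAR raw data fusion (not the paper's code):
# project Velodyne points into the image plane and stack a sparse depth map
# as a fourth channel for a YOLO-style detector. Calibration values below are
# placeholders standing in for KITTI's P2, R0_rect, and Tr_velo_to_cam.
import numpy as np

def project_velo_to_image(points_velo, P2, R0_rect, Tr_velo_to_cam):
    """Project Nx3 LiDAR points (Velodyne frame) to pixel coordinates."""
    n = points_velo.shape[0]
    pts_h = np.hstack([points_velo, np.ones((n, 1))])      # N x 4 homogeneous
    pts_cam = R0_rect @ (Tr_velo_to_cam @ pts_h.T)          # 3 x N, rectified camera frame
    depth = pts_cam[2, :]                                    # distance along camera z-axis
    pts_img = P2 @ np.vstack([pts_cam, np.ones((1, n))])     # 3 x N
    u = pts_img[0, :] / pts_img[2, :]
    v = pts_img[1, :] / pts_img[2, :]
    return u, v, depth

def build_rgbd_tensor(image, points_velo, P2, R0_rect, Tr_velo_to_cam):
    """Return an H x W x 4 array: normalized RGB plus a sparse LiDAR depth channel."""
    h, w, _ = image.shape
    depth_map = np.zeros((h, w), dtype=np.float32)
    u, v, depth = project_velo_to_image(points_velo, P2, R0_rect, Tr_velo_to_cam)
    keep = (depth > 1.0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    depth_map[v[keep].astype(int), u[keep].astype(int)] = depth[keep]
    rgb = image.astype(np.float32) / 255.0
    return np.dstack([rgb, depth_map / max(depth_map.max(), 1e-6)])

if __name__ == "__main__":
    # Placeholder calibration and synthetic data standing in for a KITTI frame.
    P2 = np.array([[700.0, 0.0, 620.0, 0.0],
                   [0.0, 700.0, 190.0, 0.0],
                   [0.0, 0.0, 1.0, 0.0]])
    R0_rect = np.eye(3)
    # Approximate Velodyne-to-camera axis permutation: x_cam = -y_velo,
    # y_cam = -z_velo, z_cam = x_velo (illustrative, not a calibrated extrinsic).
    Tr_velo_to_cam = np.array([[0.0, -1.0, 0.0, 0.0],
                               [0.0, 0.0, -1.0, 0.0],
                               [1.0, 0.0, 0.0, 0.0]])
    image = np.random.randint(0, 256, (375, 1242, 3), dtype=np.uint8)
    points = np.random.uniform([1.0, -20.0, -2.0], [70.0, 20.0, 2.0], size=(5000, 3))
    fused = build_rgbd_tensor(image, points, P2, R0_rect, Tr_velo_to_cam)
    print(fused.shape)  # (375, 1242, 4): input to a detector with a 4-channel first layer
```

A four-channel tensor of this kind can be fed to a detector whose first convolution accepts four input channels instead of three, so the network sees image and LiDAR evidence jointly; a decision fusion baseline, by contrast, would run separate detectors on the image and the point cloud and merge their outputs afterwards.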