LiDAR - Stereo Camera Fusion for Accurate Depth Estimation

Hafeez Husain Cholakkal, S. Mentasti, M. Bersani, S. Arrigoni, M. Matteucci, F. Cheli

2020 AEIT International Conference of Electrical and Electronic Technologies for Automotive (AEIT AUTOMOTIVE), published 2020-11-18
DOI: 10.23919/AEITAUTOMOTIVE50086.2020.9307398
Citations: 3
Abstract
Dense 3D reconstruction of the surrounding environment is one of the fundamental perception tasks for Advanced Driver-Assistance Systems (ADAS). In this field, accurate 3D modeling finds applications in many areas, such as obstacle detection, object tracking, and remote driving. This task can be performed with different sensors, including cameras, LiDARs, and radars. Each presents advantages and disadvantages in terms of depth precision, sensor cost, and accuracy in adverse weather conditions. For this reason, many researchers have explored the fusion of multiple sources to overcome each sensor's limits and provide an accurate representation of the vehicle's surroundings. This paper proposes a novel post-processing method for accurate depth estimation, based on a patch-wise depth correction approach, to fuse data from a LiDAR and a stereo camera. This solution allows accurate preservation of edges and object boundaries in multiple challenging scenarios.
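The abstract does not detail the correction itself, but the general idea of patch-wise LiDAR-stereo fusion can be sketched as follows: divide the dense stereo depth map into patches and, within each patch, shift the stereo values toward the sparse but more accurate LiDAR returns. This is a minimal illustrative sketch, not the authors' actual method; the function name, the median-offset correction rule, and the use of `0` to mark pixels without a LiDAR return are all assumptions.

```python
import numpy as np

def patchwise_depth_correction(stereo_depth, lidar_depth, patch=32):
    """Illustrative patch-wise correction of a dense stereo depth map
    using sparse LiDAR returns (hypothetical, not the paper's method).

    stereo_depth: dense HxW depth map from stereo matching.
    lidar_depth:  HxW map of projected LiDAR depths; 0 marks pixels
                  with no LiDAR return (an assumed convention).
    """
    corrected = stereo_depth.astype(float).copy()
    h, w = corrected.shape
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            s = corrected[y:y + patch, x:x + patch]  # view into `corrected`
            l = lidar_depth[y:y + patch, x:x + patch]
            mask = l > 0  # pixels where a LiDAR measurement exists
            if mask.any():
                # Shift the whole stereo patch by the median LiDAR-stereo
                # residual, treating LiDAR as the more accurate reference.
                s += np.median(l[mask] - s[mask])
    return corrected
```

Correcting per patch rather than globally lets the adjustment vary across the scene, which is one plausible way such an approach could preserve edges and object boundaries better than a single global offset.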