Uncertainty Estimation for Projecting Lidar Points onto Camera Images for Moving Platforms
Charika De Alvis, Mao Shan, Stewart Worrall, E. Nebot
2019 International Conference on Robotics and Automation (ICRA), pp. 6637-6643. DOI: 10.1109/ICRA.2019.8794424
Combining multiple sensors for advanced perception is a crucial requirement for autonomous vehicle navigation. Heterogeneous sensors are used to obtain rich information about the surrounding environment. Combining camera and lidar sensors provides precise range information that can be projected onto the visual image data. This gives a high-level understanding of the scene that can be used to enable context-based algorithms such as collision avoidance and navigation. The main challenge when combining these sensors is aligning the data into a common domain. This is difficult due to errors in the intrinsic calibration of the camera, in the extrinsic calibration between the camera and the lidar, and errors resulting from the motion of the platform. In this paper, we examine the algorithms required to provide motion correction for scanning lidar sensors. The error resulting from the projection of the lidar measurements into a consistent odometry frame cannot be removed entirely, so it is essential to incorporate the uncertainty of this projection when combining the two different sensor frames. This work proposes a novel framework for predicting the uncertainty of lidar measurements (in 3D) projected into the image frame (in 2D) for moving platforms. The proposed approach fuses the uncertainty of the motion correction with the uncertainty resulting from errors in the extrinsic and intrinsic calibration. By incorporating the main components of the projection error, the uncertainty of the estimation process is better represented. Experimental results for our motion correction algorithm and the proposed extended uncertainty model are demonstrated using real-world data collected on an electric vehicle equipped with wide-angle cameras covering a 180-degree field of view and a 16-beam scanning lidar.
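A common way to implement the motion correction the abstract refers to (often called de-skewing) is to interpolate the platform pose at each point's firing time and transform every point into the sensor frame at a single reference time. The sketch below is a minimal illustration of that standard formulation, not the paper's exact algorithm; `pose_at`, the reference-time choice, and the frame conventions are assumptions introduced for the example.

```python
import numpy as np

def correct_motion(points, timestamps, pose_at):
    """De-skew a lidar scan: move every point into the sensor frame
    at the end of the scan.

    points     : (N, 3) raw points, each expressed in the sensor frame
                 at its own firing time
    timestamps : (N,) per-point firing times
    pose_at    : callable t -> (4, 4) sensor pose in the odometry frame,
                 typically interpolated from the vehicle's odometry
    """
    t_ref = timestamps.max()
    T_ref_inv = np.linalg.inv(pose_at(t_ref))
    corrected = np.empty_like(points)
    for i, (p, t) in enumerate(zip(points, timestamps)):
        # Pose at firing time, re-expressed relative to the scan-end frame.
        T = T_ref_inv @ pose_at(t)
        corrected[i] = T[:3, :3] @ p + T[:3, 3]
    return corrected
```

Because `pose_at` itself comes from an odometry estimate with its own error, the corrected points inherit that uncertainty, which is why the projection error discussed above cannot be removed entirely.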
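The paper's extended model fuses motion-correction uncertainty with extrinsic and intrinsic calibration uncertainty. As a simplified illustration of the underlying idea only, the sketch below propagates a 3D point covariance through an assumed pinhole projection to first order (via the Jacobian). All symbols and numeric values (`T_cl`, `K`, the covariance) are hypothetical; a full treatment along the lines of the paper would also differentiate with respect to the calibration and pose parameters themselves.

```python
import numpy as np

def project_point(p_l, T_cl, K):
    """Project a 3D lidar-frame point into pixel coordinates.

    p_l  : (3,) motion-corrected point in the lidar frame
    T_cl : (4, 4) extrinsic transform from lidar frame to camera frame
    K    : (3, 3) camera intrinsic matrix (pinhole model)
    """
    x, y, z = T_cl[:3, :3] @ p_l + T_cl[:3, 3]   # lidar -> camera frame
    return np.array([K[0, 0] * x / z + K[0, 2],  # perspective division
                     K[1, 1] * y / z + K[1, 2]])

def projection_jacobian(p_l, T_cl, K):
    """Jacobian of the pixel coordinates w.r.t. the 3D lidar point."""
    R = T_cl[:3, :3]
    x, y, z = R @ p_l + T_cl[:3, 3]
    fx, fy = K[0, 0], K[1, 1]
    # d(u,v)/d(p_c) for the pinhole model, chained through p_c = R p_l + t.
    J_proj = np.array([[fx / z, 0.0,    -fx * x / z**2],
                       [0.0,    fy / z, -fy * y / z**2]])
    return J_proj @ R

def propagate_covariance(p_l, cov_3d, T_cl, K):
    """First-order propagation: 3D point covariance -> 2D pixel covariance."""
    J = projection_jacobian(p_l, T_cl, K)
    return J @ cov_3d @ J.T

# Illustrative values only.
K = np.array([[700.0, 0.0, 640.0],
              [0.0, 700.0, 360.0],
              [0.0, 0.0, 1.0]])
T_cl = np.eye(4)                       # lidar and camera frames coincident
p_l = np.array([2.0, 0.5, 10.0])
cov_3d = np.diag([0.02, 0.02, 0.05])   # stand-in for fused 3D uncertainty

uv = project_point(p_l, T_cl, K)
cov_2d = propagate_covariance(p_l, cov_3d, T_cl, K)
```

The resulting 2x2 pixel covariance shrinks with range (through the 1/z terms in the Jacobian), which matches the intuition that a given 3D error subtends fewer pixels for distant points.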