Uncertainty Estimation for Projecting Lidar Points onto Camera Images for Moving Platforms

Charika De Alvis, Mao Shan, Stewart Worrall, E. Nebot
{"title":"移动平台上激光雷达点投影到相机图像上的不确定性估计","authors":"Charika De Alvis, Mao Shan, Stewart Worrall, E. Nebot","doi":"10.1109/ICRA.2019.8794424","DOIUrl":null,"url":null,"abstract":"Combining multiple sensors for advanced perception is a crucial requirement for autonomous vehicle navigation. Heterogeneous sensors are used to obtain rich information about the surrounding environment. The combination of the camera and lidar sensors enables precise range information that can be projected onto the visual image data. This gives a high level understanding of the scene which can be used to enable context based algorithms such as collision avoidance and navigation. The main challenge when combining these sensors is aligning the data into a common domain. This can be difficult due to the errors in the intrinsic calibration of the camera, extrinsic calibration between the camera and the lidar and errors resulting from the motion of the platform. In this paper, we examine the algorithms required to provide motion correction for scanning lidar sensors. The error resulting from the projection of the lidar measurements into a consistent odometry frame is not possible to remove entirely, and as such it is essential to incorporate the uncertainty of this projection when combining the two different sensor frames. This work proposes a novel framework for the prediction of the uncertainty of lidar measurements (in 3D) projected in to the image frame (in 2D) for moving platforms. The proposed approach fuses the uncertainty of the motion correction with uncertainty resulting from errors in the extrinsic and intrinsic calibration. By incorporating the main components of the projection error, the uncertainty of the estimation process is better represented. Experimental results for our motion correction algorithm and the proposed extended uncertainty model are demonstrated using real-world data collected on an electric vehicle equipped with wide-angle cameras covering a 180-degree field of view and a 16-beam scanning lidar.","PeriodicalId":6730,"journal":{"name":"2019 International Conference on Robotics and Automation (ICRA)","volume":"21 1","pages":"6637-6643"},"PeriodicalIF":0.0000,"publicationDate":"2019-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":"{\"title\":\"Uncertainty Estimation for Projecting Lidar Points onto Camera Images for Moving Platforms\",\"authors\":\"Charika De Alvis, Mao Shan, Stewart Worrall, E. Nebot\",\"doi\":\"10.1109/ICRA.2019.8794424\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Combining multiple sensors for advanced perception is a crucial requirement for autonomous vehicle navigation. Heterogeneous sensors are used to obtain rich information about the surrounding environment. The combination of the camera and lidar sensors enables precise range information that can be projected onto the visual image data. This gives a high level understanding of the scene which can be used to enable context based algorithms such as collision avoidance and navigation. The main challenge when combining these sensors is aligning the data into a common domain. This can be difficult due to the errors in the intrinsic calibration of the camera, extrinsic calibration between the camera and the lidar and errors resulting from the motion of the platform. In this paper, we examine the algorithms required to provide motion correction for scanning lidar sensors. 
The error resulting from the projection of the lidar measurements into a consistent odometry frame is not possible to remove entirely, and as such it is essential to incorporate the uncertainty of this projection when combining the two different sensor frames. This work proposes a novel framework for the prediction of the uncertainty of lidar measurements (in 3D) projected in to the image frame (in 2D) for moving platforms. The proposed approach fuses the uncertainty of the motion correction with uncertainty resulting from errors in the extrinsic and intrinsic calibration. By incorporating the main components of the projection error, the uncertainty of the estimation process is better represented. Experimental results for our motion correction algorithm and the proposed extended uncertainty model are demonstrated using real-world data collected on an electric vehicle equipped with wide-angle cameras covering a 180-degree field of view and a 16-beam scanning lidar.\",\"PeriodicalId\":6730,\"journal\":{\"name\":\"2019 International Conference on Robotics and Automation (ICRA)\",\"volume\":\"21 1\",\"pages\":\"6637-6643\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-05-20\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"6\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2019 International Conference on Robotics and Automation (ICRA)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICRA.2019.8794424\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 International Conference on Robotics and Automation (ICRA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICRA.2019.8794424","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 6

Abstract

Combining multiple sensors for advanced perception is a crucial requirement for autonomous vehicle navigation. Heterogeneous sensors are used to obtain rich information about the surrounding environment. The combination of camera and lidar sensors provides precise range information that can be projected onto the visual image data. This gives a high-level understanding of the scene, which can be used to enable context-based algorithms such as collision avoidance and navigation. The main challenge when combining these sensors is aligning the data into a common domain. This is difficult because of errors in the intrinsic calibration of the camera, errors in the extrinsic calibration between the camera and the lidar, and errors resulting from the motion of the platform. In this paper, we examine the algorithms required to provide motion correction for scanning lidar sensors. The error resulting from the projection of the lidar measurements into a consistent odometry frame cannot be removed entirely, and as such it is essential to incorporate the uncertainty of this projection when combining the two different sensor frames. This work proposes a novel framework for predicting the uncertainty of lidar measurements (in 3D) projected into the image frame (in 2D) for moving platforms. The proposed approach fuses the uncertainty of the motion correction with the uncertainty resulting from errors in the extrinsic and intrinsic calibration. By incorporating the main components of the projection error, the uncertainty of the estimation process is better represented. Experimental results for our motion correction algorithm and the proposed extended uncertainty model are demonstrated using real-world data collected on an electric vehicle equipped with wide-angle cameras covering a 180-degree field of view and a 16-beam scanning lidar.
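The central operation in a framework of this kind is projecting a 3D lidar point into the 2D image plane and propagating the point's covariance through that projection. The Python sketch below is a minimal illustration of first-order (Jacobian-based) uncertainty propagation through a pinhole camera model, not the authors' implementation: the extrinsic rotation `R` and translation `t`, the intrinsics `fx, fy, cx, cy`, and the input covariance are placeholder values, and the paper's additional fusion of extrinsic and intrinsic calibration uncertainty would enter as further Jacobian terms over those calibration parameters.

```python
import numpy as np

def project_point(p_lidar, R, t, fx, fy, cx, cy):
    """Project a 3D lidar-frame point into the image via a pinhole model."""
    p_cam = R @ p_lidar + t            # lidar frame -> camera frame
    x, y, z = p_cam
    u = fx * x / z + cx
    v = fy * y / z + cy
    return np.array([u, v]), p_cam

def projection_jacobian(p_cam, R, fx, fy):
    """Jacobian of the pixel coordinates w.r.t. the 3D lidar-frame point."""
    x, y, z = p_cam
    # d(pixel)/d(p_cam) for the pinhole model
    J_proj = np.array([[fx / z, 0.0, -fx * x / z**2],
                       [0.0, fy / z, -fy * y / z**2]])
    # chain rule: p_cam = R p_lidar + t, so d(p_cam)/d(p_lidar) = R
    return J_proj @ R

def project_with_uncertainty(p_lidar, cov_lidar, R, t, fx, fy, cx, cy):
    """First-order propagation of a 3D point covariance into pixel space."""
    pix, p_cam = project_point(p_lidar, R, t, fx, fy, cx, cy)
    J = projection_jacobian(p_cam, R, fx, fy)
    cov_pix = J @ cov_lidar @ J.T      # 2x2 pixel-space covariance
    return pix, cov_pix

# Example: a point 10 m ahead with 5 cm isotropic 3D uncertainty
# (e.g. a covariance inflated by motion correction); all values are
# placeholders chosen for illustration.
p = np.array([0.5, -0.2, 10.0])
cov = (0.05**2) * np.eye(3)
R, t = np.eye(3), np.zeros(3)          # placeholder extrinsics
pix, cov_pix = project_with_uncertainty(p, cov, R, t, 600.0, 600.0, 320.0, 240.0)
print(pix, np.sqrt(np.diag(cov_pix)))  # pixel location and 1-sigma pixel error
```

Under this linearization, a point at 10 m range with 5 cm of 3D uncertainty maps to roughly 3 pixels of 1-sigma image-plane uncertainty at a 600-pixel focal length. Because the Jacobian scales with 1/z, the same 3D error produces a smaller pixel-space error at longer ranges, which is one reason a range-dependent uncertainty model matters when fusing the two sensor frames.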