Single Frame Lidar-Camera Calibration Using Registration of 3D Planes

Ashutosh Singandhupe, Hung M. La, Q. Ha
DOI: 10.1109/IRC55401.2022.00076
Published in: 2022 Sixth IEEE International Conference on Robotic Computing (IRC)
Publication date: 2022-12-01
Citations: 1

Abstract

This work focuses on finding the extrinsic parameters (rotation and translation) between Lidar and an RGB camera sensor. We use a planar checkerboard and place it inside the Field-of-View (FOV) of both sensors, where we extract the 3D plane information of the checkerboard acquired from the sensor’s data. The plane coefficients extracted from the sensor’s data are used to construct a well-structured set of 3D points. These 3D points are then ’aligned,’ which gives the relative transformation between the two sensors. We use our proposed Correntropy Similarity Matrix Iterative Closest Point (CoSMICP) Algorithm to estimate the relative transformation. This work uses a single frame of the point cloud data acquired from the Lidar sensor and a single frame from the calibrated camera data to perform this operation. From the camera image, we use the projection of the calibration target’s corner points to compute the 3D points, and along the process, we calculate the 3D plane equation using the corner points. We evaluate our approach on a simulated dataset with complex environment settings, making use of the freedom to assess under multiple configurations. Through the obtained results, we verify our method under various configurations.
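The pipeline the abstract describes (fit a plane to the checkerboard data from each sensor, build a well-structured set of 3D points, then align the two sets to recover the relative transformation) can be sketched with standard tools. This is a minimal sketch, not the authors' CoSMICP algorithm: it substitutes the classic closed-form SVD (Kabsch) solution for the case of known point correspondences, and `fit_plane` and `estimate_rigid_transform` are illustrative names, not from the paper.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through an (N, 3) point set.

    Returns (n, d) with unit normal n and offset d such that
    n @ p + d ~ 0 for points p on the plane.
    """
    centroid = points.mean(axis=0)
    # The right singular vector with the smallest singular value of the
    # centered point set is the plane normal.
    _, _, vt = np.linalg.svd(points - centroid)
    n = vt[-1]
    return n, -n @ centroid

def estimate_rigid_transform(src, dst):
    """Closed-form (Kabsch) rigid alignment of matched 3D point sets.

    src, dst: (N, 3) arrays of corresponding points.
    Returns R (3x3) and t (3,) such that dst ~ src @ R.T + t.
    """
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    h = src_c.T @ dst_c                      # 3x3 cross-covariance
    u, _, vt = np.linalg.svd(h)
    s = np.sign(np.linalg.det(vt.T @ u.T))   # guard against a reflection
    r = vt.T @ np.diag([1.0, 1.0, s]) @ u.T
    t = dst.mean(axis=0) - r @ src.mean(axis=0)
    return r, t
```

In the paper's setting, the structured point sets come from the plane coefficients of the checkerboard seen by each sensor, and the alignment step is performed by CoSMICP rather than this closed-form solution, which assumes exact one-to-one correspondences.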