Laser 3D tightly coupled mapping method based on visual information

IF 1.9 | Region 4, Computer Science | Q3, ENGINEERING, INDUSTRIAL | Industrial Robot-The International Journal of Robotics Research and Application | Pub Date: 2023-04-07 | DOI: 10.1108/ir-02-2023-0016
Sixing Liu, Yan Chai, Rui Yuan, H. Miao
{"title":"基于视觉信息的激光三维紧耦合映射方法","authors":"Sixing Liu, Yan Chai, Rui Yuan, H. Miao","doi":"10.1108/ir-02-2023-0016","DOIUrl":null,"url":null,"abstract":"\nPurpose\nSimultaneous localization and map building (SLAM), as a state estimation problem, is a prerequisite for solving the problem of autonomous vehicle motion in unknown environments. Existing algorithms are based on laser or visual odometry; however, the lidar sensing range is small, the amount of data features is small, the camera is vulnerable to external conditions and the localization and map building cannot be performed stably and accurately using a single sensor. This paper aims to propose a laser three dimensions tightly coupled map building method that incorporates visual information, and uses laser point cloud information and image information to complement each other to improve the overall performance of the algorithm.\n\n\nDesign/methodology/approach\nThe visual feature points are first matched at the front end of the method, and the mismatched point pairs are removed using the bidirectional random sample consensus (RANSAC) algorithm. The laser point cloud is then used to obtain its depth information, while the two types of feature points are fed into the pose estimation module for a tightly coupled local bundle adjustment solution using a heuristic simulated annealing algorithm. Finally, the visual bag-of-words model is fused in the laser point cloud information to establish a threshold to construct a loopback framework to further reduce the cumulative drift error of the system over time.\n\n\nFindings\nExperiments on publicly available data sets show that the proposed method in this paper can match its real trajectory well. For various scenes, the map can be constructed by using the complementary laser and vision sensors, with high accuracy and robustness. At the same time, the method is verified in a real environment using an autonomous walking acquisition platform, and the system loaded with the method can run well for a long time and take into account the environmental adaptability of multiple scenes.\n\n\nOriginality/value\nA multi-sensor data tight coupling method is proposed to fuse laser and vision information for optimal solution of the positional attitude. A bidirectional RANSAC algorithm is used for the removal of visual mismatched point pairs. Further, oriented fast and rotated brief feature points are used to build a bag-of-words model and construct a real-time loopback framework to reduce error accumulation. According to the experimental validation results, the accuracy and robustness of the single-sensor SLAM algorithm can be improved.\n","PeriodicalId":54987,"journal":{"name":"Industrial Robot-The International Journal of Robotics Research and Application","volume":null,"pages":null},"PeriodicalIF":1.9000,"publicationDate":"2023-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Laser 3D tightly coupled mapping method based on visual information\",\"authors\":\"Sixing Liu, Yan Chai, Rui Yuan, H. Miao\",\"doi\":\"10.1108/ir-02-2023-0016\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"\\nPurpose\\nSimultaneous localization and map building (SLAM), as a state estimation problem, is a prerequisite for solving the problem of autonomous vehicle motion in unknown environments. 
Existing algorithms are based on laser or visual odometry; however, the lidar sensing range is small, the amount of data features is small, the camera is vulnerable to external conditions and the localization and map building cannot be performed stably and accurately using a single sensor. This paper aims to propose a laser three dimensions tightly coupled map building method that incorporates visual information, and uses laser point cloud information and image information to complement each other to improve the overall performance of the algorithm.\\n\\n\\nDesign/methodology/approach\\nThe visual feature points are first matched at the front end of the method, and the mismatched point pairs are removed using the bidirectional random sample consensus (RANSAC) algorithm. The laser point cloud is then used to obtain its depth information, while the two types of feature points are fed into the pose estimation module for a tightly coupled local bundle adjustment solution using a heuristic simulated annealing algorithm. Finally, the visual bag-of-words model is fused in the laser point cloud information to establish a threshold to construct a loopback framework to further reduce the cumulative drift error of the system over time.\\n\\n\\nFindings\\nExperiments on publicly available data sets show that the proposed method in this paper can match its real trajectory well. For various scenes, the map can be constructed by using the complementary laser and vision sensors, with high accuracy and robustness. At the same time, the method is verified in a real environment using an autonomous walking acquisition platform, and the system loaded with the method can run well for a long time and take into account the environmental adaptability of multiple scenes.\\n\\n\\nOriginality/value\\nA multi-sensor data tight coupling method is proposed to fuse laser and vision information for optimal solution of the positional attitude. A bidirectional RANSAC algorithm is used for the removal of visual mismatched point pairs. Further, oriented fast and rotated brief feature points are used to build a bag-of-words model and construct a real-time loopback framework to reduce error accumulation. 
According to the experimental validation results, the accuracy and robustness of the single-sensor SLAM algorithm can be improved.\\n\",\"PeriodicalId\":54987,\"journal\":{\"name\":\"Industrial Robot-The International Journal of Robotics Research and Application\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":1.9000,\"publicationDate\":\"2023-04-07\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Industrial Robot-The International Journal of Robotics Research and Application\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1108/ir-02-2023-0016\",\"RegionNum\":4,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"ENGINEERING, INDUSTRIAL\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Industrial Robot-The International Journal of Robotics Research and Application","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1108/ir-02-2023-0016","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"ENGINEERING, INDUSTRIAL","Score":null,"Total":0}
Citations: 0

Abstract

Purpose
Simultaneous localization and map building (SLAM), as a state estimation problem, is a prerequisite for autonomous vehicle motion in unknown environments. Existing algorithms are based on laser or visual odometry alone; however, the lidar has a limited sensing range and yields relatively few data features, the camera is vulnerable to external conditions, and localization and map building cannot be performed stably and accurately with a single sensor. This paper aims to propose a laser 3D tightly coupled map-building method that incorporates visual information, using laser point cloud information and image information to complement each other and improve the overall performance of the algorithm.

Design/methodology/approach
Visual feature points are first matched at the front end of the method, and mismatched point pairs are removed using a bidirectional random sample consensus (RANSAC) algorithm. The laser point cloud is then used to obtain depth information for these features, and both types of feature points are fed into the pose estimation module for a tightly coupled local bundle adjustment solved with a heuristic simulated annealing algorithm. Finally, a visual bag-of-words model is fused with the laser point cloud information to establish a threshold and construct a loop-closure framework that further reduces the system's cumulative drift error over time.

Findings
Experiments on publicly available data sets show that the proposed method matches the ground-truth trajectory well. Across a variety of scenes, the map can be constructed using the complementary laser and vision sensors with high accuracy and robustness. The method is also verified in a real environment on an autonomous walking acquisition platform; the system running the method operates well over long periods and adapts to multiple scene types.

Originality/value
A multi-sensor tight-coupling method is proposed to fuse laser and vision information for an optimal pose solution. A bidirectional RANSAC algorithm is used to remove visually mismatched point pairs. Further, oriented FAST and rotated BRIEF (ORB) feature points are used to build a bag-of-words model and construct a real-time loop-closure framework that reduces error accumulation. The experimental validation shows improved accuracy and robustness compared with single-sensor SLAM algorithms.
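
The abstract gives only an outline of the visual front end. As a rough illustration, the sketch below shows one way such a stage could look in Python with OpenCV: mutual (cross-check) nearest-neighbour matching of ORB features followed by a RANSAC check on the fundamental matrix to discard mismatched pairs. The authors' bidirectional RANSAC variant is not specified in the abstract, so this is only an approximation of it; the function name match_and_filter and all parameter values are assumptions.

```python
# A minimal front-end sketch, assuming OpenCV (cv2) and two grayscale frames.
import cv2
import numpy as np

def match_and_filter(img1, img2):
    """Match ORB features between two frames and drop mismatched pairs."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    # Mutual nearest-neighbour ("cross-check") matching keeps a pair only if
    # each descriptor is the other's best match in both directions.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # RANSAC on the fundamental matrix acts as a geometric consistency check;
    # the inlier mask removes the remaining mismatched point pairs.
    F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.999)
    if mask is None:
        return pts1, pts2
    inliers = mask.ravel().astype(bool)
    return pts1[inliers], pts2[inliers]
```

In a full pipeline the surviving pairs would then be handed to the depth-association and tightly coupled pose-estimation stages described above.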
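The abstract likewise states that the laser point cloud supplies depth for the visual features without describing the association step. Below is a minimal sketch of one common scheme, assuming the lidar points have already been transformed into the camera frame with known extrinsics: each feature inherits the depth of the nearest projected lidar point within a small pixel radius. The function associate_depth, the radius_px parameter and the NaN convention for unmatched features are illustrative, not taken from the paper.

```python
# A minimal depth-association sketch; names and defaults are assumptions.
import numpy as np

def associate_depth(features_uv, lidar_xyz_cam, K, radius_px=3.0):
    """Assign a depth to each 2D visual feature from projected lidar points.

    features_uv   : (N, 2) pixel coordinates of matched visual features.
    lidar_xyz_cam : (M, 3) lidar points expressed in the camera frame.
    K             : (3, 3) camera intrinsic matrix.
    Returns an (N,) array of depths; NaN where no lidar point projects
    within radius_px pixels of the feature.
    """
    depths = np.full(len(features_uv), np.nan)

    # Keep only points in front of the camera, then project to the image plane.
    pts = lidar_xyz_cam[lidar_xyz_cam[:, 2] > 0.1]
    if len(pts) == 0:
        return depths
    proj = (K @ pts.T).T
    uv = proj[:, :2] / proj[:, 2:3]

    for i, feat in enumerate(features_uv):
        d2 = np.sum((uv - feat) ** 2, axis=1)
        j = int(np.argmin(d2))
        if d2[j] <= radius_px ** 2:
            depths[i] = pts[j, 2]  # depth of the nearest projected lidar point
    return depths
```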
Source journal
CiteScore: 4.50 | Self-citation rate: 16.70% | Articles per year: 86 | Review time: 5.7 months
Journal description
Industrial Robot publishes peer reviewed research articles, technology reviews and specially commissioned case studies. Each issue includes high quality content covering all aspects of robotic technology, and reflecting the most interesting and strategically important research and development activities from around the world. The journal's policy of not publishing work that has only been tested in simulation means that only the very best and most practical research articles are included. This ensures that the material that is published has real relevance and value for commercial manufacturing and research organizations.
Industrial Robot's coverage includes, but is not restricted to: Automatic assembly; Flexible manufacturing; Programming optimisation; Simulation and offline programming; Service robots; Autonomous robots; Swarm intelligence; Humanoid robots; Prosthetics and exoskeletons; Machine intelligence; Military robots; Underwater and aerial robots; Cooperative robots; Flexible grippers and tactile sensing; Robot vision; Teleoperation; Mobile robots; Search and rescue robots; Robot welding; Collision avoidance; Robotic machining; Surgical robots.
Call for Papers 2020: AI for Autonomous Unmanned Systems; Agricultural Robot; Brain-Computer Interfaces for Human-Robot Interaction; Cooperative Robots; Robots for Environmental Monitoring; Rehabilitation Robots; Wearable Robotics/Exoskeletons.
Latest articles in this journal
Research on dynamic parameter identification and collision detection method for cooperative robots
Sequential calibration of transmission ratios for joints of 6-DOF serial industrial robots based on laser tracker
Design and analysis of a continuum manipulator for use in narrow spaces
Tightly coupled IMU-Laser-RTK odometry algorithm for underground multi-layer and large-scale environment
Design, modeling and kinematic analysis of a multi-configuration dexterous hand with integrated high-dimensional sensors