{"title":"基于视觉信息的激光三维紧耦合映射方法","authors":"Sixing Liu, Yan Chai, Rui Yuan, H. Miao","doi":"10.1108/ir-02-2023-0016","DOIUrl":null,"url":null,"abstract":"\nPurpose\nSimultaneous localization and map building (SLAM), as a state estimation problem, is a prerequisite for solving the problem of autonomous vehicle motion in unknown environments. Existing algorithms are based on laser or visual odometry; however, the lidar sensing range is small, the amount of data features is small, the camera is vulnerable to external conditions and the localization and map building cannot be performed stably and accurately using a single sensor. This paper aims to propose a laser three dimensions tightly coupled map building method that incorporates visual information, and uses laser point cloud information and image information to complement each other to improve the overall performance of the algorithm.\n\n\nDesign/methodology/approach\nThe visual feature points are first matched at the front end of the method, and the mismatched point pairs are removed using the bidirectional random sample consensus (RANSAC) algorithm. The laser point cloud is then used to obtain its depth information, while the two types of feature points are fed into the pose estimation module for a tightly coupled local bundle adjustment solution using a heuristic simulated annealing algorithm. Finally, the visual bag-of-words model is fused in the laser point cloud information to establish a threshold to construct a loopback framework to further reduce the cumulative drift error of the system over time.\n\n\nFindings\nExperiments on publicly available data sets show that the proposed method in this paper can match its real trajectory well. For various scenes, the map can be constructed by using the complementary laser and vision sensors, with high accuracy and robustness. At the same time, the method is verified in a real environment using an autonomous walking acquisition platform, and the system loaded with the method can run well for a long time and take into account the environmental adaptability of multiple scenes.\n\n\nOriginality/value\nA multi-sensor data tight coupling method is proposed to fuse laser and vision information for optimal solution of the positional attitude. A bidirectional RANSAC algorithm is used for the removal of visual mismatched point pairs. Further, oriented fast and rotated brief feature points are used to build a bag-of-words model and construct a real-time loopback framework to reduce error accumulation. According to the experimental validation results, the accuracy and robustness of the single-sensor SLAM algorithm can be improved.\n","PeriodicalId":54987,"journal":{"name":"Industrial Robot-The International Journal of Robotics Research and Application","volume":"61 1","pages":""},"PeriodicalIF":1.9000,"publicationDate":"2023-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Laser 3D tightly coupled mapping method based on visual information\",\"authors\":\"Sixing Liu, Yan Chai, Rui Yuan, H. Miao\",\"doi\":\"10.1108/ir-02-2023-0016\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"\\nPurpose\\nSimultaneous localization and map building (SLAM), as a state estimation problem, is a prerequisite for solving the problem of autonomous vehicle motion in unknown environments. 
Existing algorithms are based on laser or visual odometry; however, the lidar sensing range is small, the amount of data features is small, the camera is vulnerable to external conditions and the localization and map building cannot be performed stably and accurately using a single sensor. This paper aims to propose a laser three dimensions tightly coupled map building method that incorporates visual information, and uses laser point cloud information and image information to complement each other to improve the overall performance of the algorithm.\\n\\n\\nDesign/methodology/approach\\nThe visual feature points are first matched at the front end of the method, and the mismatched point pairs are removed using the bidirectional random sample consensus (RANSAC) algorithm. The laser point cloud is then used to obtain its depth information, while the two types of feature points are fed into the pose estimation module for a tightly coupled local bundle adjustment solution using a heuristic simulated annealing algorithm. Finally, the visual bag-of-words model is fused in the laser point cloud information to establish a threshold to construct a loopback framework to further reduce the cumulative drift error of the system over time.\\n\\n\\nFindings\\nExperiments on publicly available data sets show that the proposed method in this paper can match its real trajectory well. For various scenes, the map can be constructed by using the complementary laser and vision sensors, with high accuracy and robustness. At the same time, the method is verified in a real environment using an autonomous walking acquisition platform, and the system loaded with the method can run well for a long time and take into account the environmental adaptability of multiple scenes.\\n\\n\\nOriginality/value\\nA multi-sensor data tight coupling method is proposed to fuse laser and vision information for optimal solution of the positional attitude. A bidirectional RANSAC algorithm is used for the removal of visual mismatched point pairs. Further, oriented fast and rotated brief feature points are used to build a bag-of-words model and construct a real-time loopback framework to reduce error accumulation. 
According to the experimental validation results, the accuracy and robustness of the single-sensor SLAM algorithm can be improved.\\n\",\"PeriodicalId\":54987,\"journal\":{\"name\":\"Industrial Robot-The International Journal of Robotics Research and Application\",\"volume\":\"61 1\",\"pages\":\"\"},\"PeriodicalIF\":1.9000,\"publicationDate\":\"2023-04-07\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Industrial Robot-The International Journal of Robotics Research and Application\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1108/ir-02-2023-0016\",\"RegionNum\":4,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"ENGINEERING, INDUSTRIAL\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Industrial Robot-The International Journal of Robotics Research and Application","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1108/ir-02-2023-0016","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"ENGINEERING, INDUSTRIAL","Score":null,"Total":0}
Laser 3D tightly coupled mapping method based on visual information
Purpose
Simultaneous localization and map building (SLAM), as a state estimation problem, is a prerequisite for autonomous vehicle motion in unknown environments. Existing algorithms are based on laser or visual odometry; however, lidar has a limited sensing range and yields sparse features, while the camera is vulnerable to external conditions, so localization and map building cannot be performed stably and accurately with a single sensor. This paper aims to propose a laser 3D tightly coupled mapping method that incorporates visual information, using laser point cloud information and image information to complement each other and improve the overall performance of the algorithm.
Design/methodology/approach
At the front end of the method, the visual feature points are first matched, and mismatched point pairs are removed using a bidirectional random sample consensus (RANSAC) algorithm. The laser point cloud is then used to recover depth information for the visual features, and both types of feature points are fed into the pose estimation module for a tightly coupled local bundle adjustment solved with a heuristic simulated annealing algorithm. Finally, the visual bag-of-words model is fused with the laser point cloud information, and a similarity threshold is established to build a loop-closure framework that further reduces the system's cumulative drift error over time.
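To make the front-end filtering step concrete, the following is a minimal Python sketch of bidirectional RANSAC outlier rejection. The abstract does not specify the feature type, geometric model or thresholds, so the use of OpenCV ORB features, a fundamental-matrix model, and all function names and parameter values here are illustrative assumptions, not the authors' implementation.

```python
import cv2
import numpy as np

def ransac_inlier_mask(pts_src, pts_dst):
    """Fit a fundamental matrix with RANSAC and return a boolean inlier mask."""
    # Model choice and thresholds (1.0 px, 0.99 confidence) are assumptions.
    _, mask = cv2.findFundamentalMat(pts_src, pts_dst, cv2.FM_RANSAC, 1.0, 0.99)
    if mask is None:  # RANSAC failed (e.g. too few points)
        return np.zeros(len(pts_src), dtype=bool)
    return mask.ravel().astype(bool)

def bidirectional_ransac(kp1, kp2, matches):
    """Keep only the match pairs that survive RANSAC in both directions."""
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    forward = ransac_inlier_mask(pts1, pts2)   # image 1 -> image 2
    backward = ransac_inlier_mask(pts2, pts1)  # image 2 -> image 1
    keep = forward & backward                  # consistent in both directions
    return [m for m, ok in zip(matches, keep) if ok]

# Usage: detect ORB features on two frames, brute-force match, then filter.
orb = cv2.ORB_create(nfeatures=1000)
img1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
inliers = bidirectional_ransac(kp1, kp2, matches)
```

The point of running RANSAC in both matching directions is that a pair kept only by the forward model can still be an outlier of the backward model; intersecting the two inlier masks keeps only geometrically consistent correspondences before they reach the pose estimation module.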
Findings
Experiments on publicly available data sets show that the trajectory estimated by the proposed method matches the real trajectory well. Across various scenes, the map can be constructed using the complementary laser and vision sensors with high accuracy and robustness. The method is also verified in a real environment on an autonomous walking acquisition platform: a system loaded with the method can run well for a long time and adapts to multiple scene types.
Originality/value
A multi-sensor data tight coupling method is proposed to fuse laser and vision information for an optimal solution of the pose. A bidirectional RANSAC algorithm is used to remove visually mismatched point pairs. Further, oriented FAST and rotated BRIEF (ORB) feature points are used to build a bag-of-words model and construct a real-time loop-closure framework to reduce error accumulation. The experimental validation results show that the method improves on the accuracy and robustness of single-sensor SLAM algorithms.
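To illustrate the loop-detection idea, here is a minimal bag-of-words sketch in Python. It is not the authors' implementation: the vocabulary size, the cosine-similarity threshold, and the use of Euclidean k-means over ORB descriptors (which are binary, so treating them as float vectors is a simplification) are all assumptions, and the fusion with laser point cloud information described above is omitted.

```python
import cv2
import numpy as np

def build_vocabulary(all_descriptors, k=500):
    """Cluster stacked ORB descriptors into k visual words with k-means."""
    data = np.float32(all_descriptors)  # simplification: binary ORB treated as float
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1e-3)
    _, _, centers = cv2.kmeans(data, k, None, criteria, 3, cv2.KMEANS_PP_CENTERS)
    return centers

def bow_histogram(descriptors, vocabulary):
    """Assign each descriptor to its nearest word; return an L2-normalized histogram."""
    dists = np.linalg.norm(
        np.float32(descriptors)[:, None, :] - vocabulary[None, :, :], axis=2)
    words = dists.argmin(axis=1)
    hist = np.bincount(words, minlength=len(vocabulary)).astype(np.float32)
    return hist / (np.linalg.norm(hist) + 1e-9)

def detect_loop(current_hist, keyframe_hists, threshold=0.80):
    """Return the index of the most similar past keyframe, or None below threshold."""
    if not keyframe_hists:
        return None
    sims = [float(current_hist @ h) for h in keyframe_hists]  # cosine similarity
    best = int(np.argmax(sims))
    return best if sims[best] > threshold else None
```

Each keyframe is reduced to a word histogram, so loop candidates can be screened in real time by comparing fixed-length vectors rather than raw descriptor sets; only candidates above the threshold would then be verified geometrically before correcting accumulated drift.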
Journal Introduction:
Industrial Robot publishes peer reviewed research articles, technology reviews and specially commissioned case studies. Each issue includes high quality content covering all aspects of robotic technology, and reflecting the most interesting and strategically important research and development activities from around the world.
The journal's policy of not publishing work that has only been tested in simulation means that only the very best and most practical research articles are included. This ensures that the material that is published has real relevance and value for commercial manufacturing and research organizations. Industrial Robot's coverage includes, but is not restricted to:
Automatic assembly
Flexible manufacturing
Programming optimisation
Simulation and offline programming
Service robots
Autonomous robots
Swarm intelligence
Humanoid robots
Prosthetics and exoskeletons
Machine intelligence
Military robots
Underwater and aerial robots
Cooperative robots
Flexible grippers and tactile sensing
Robot vision
Teleoperation
Mobile robots
Search and rescue robots
Robot welding
Collision avoidance
Robotic machining
Surgical robots
Call for Papers 2020
AI for Autonomous Unmanned Systems
Agricultural Robot
Brain-Computer Interfaces for Human-Robot Interaction
Cooperative Robots
Robots for Environmental Monitoring
Rehabilitation Robots
Wearable Robotics/Exoskeletons.