Robust LiDAR visual inertial odometry for dynamic scenes
Gang Peng, Chong Cao, Bocheng Chen, Lu Hu, Dingxin He
Measurement Science and Technology · Journal article · Published 13 June 2024 · DOI: 10.1088/1361-6501/ad57dc · Impact factor 2.7 · JCR Q1 (Engineering, Multidisciplinary)
Citations: 0
Abstract
Traditional visual inertial simultaneous localisation and mapping (SLAM) systems do not fully account for dynamic objects in the scene, which degrades the quality of visual feature point matching. In addition, dynamic objects can cause illumination changes that reduce the performance of the system's visual front end and loop closure detection. To address this problem, this study combines 3D light detection and ranging (LiDAR), a camera, and an inertial measurement unit (IMU) in a tightly coupled manner to estimate the pose of mobile robots, yielding a robust LiDAR visual inertial odometry that effectively filters out dynamic feature points. A dynamic feature point detection algorithm with an attention mechanism is also introduced for target detection and optical flow tracking. In experiments on public datasets and real indoor scenes, the proposed method improved the accuracy and robustness of pose estimation in scenes with dynamic objects and varying illumination compared with traditional methods.
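The abstract gives no implementation details, but the general idea of rejecting dynamic feature points by combining object detection with optical flow tracking can be sketched as below. This is a minimal illustration under stated assumptions, not the authors' code: detect_dynamic_boxes() is a hypothetical stand-in for the paper's attention-based detector, the flow-residual check is a crude substitute for the geometric consistency tests a full system would run, and FLOW_RESIDUAL_THRESH is an assumed value.

```python
# Minimal sketch: reject dynamic feature points using a (hypothetical)
# object detector plus Lucas-Kanade optical flow tracking, keeping only
# static points for pose estimation.
import cv2
import numpy as np

FLOW_RESIDUAL_THRESH = 2.0  # pixels; assumed value, needs tuning per setup


def detect_dynamic_boxes(frame_bgr):
    """Hypothetical detector returning [(x, y, w, h), ...] boxes around
    potentially dynamic objects (people, vehicles, ...)."""
    raise NotImplementedError("plug in an object detector here")


def in_any_box(pt, boxes):
    x, y = pt
    return any(bx <= x <= bx + bw and by <= y <= by + bh
               for (bx, by, bw, bh) in boxes)


def filter_dynamic_features(prev_gray, cur_gray, cur_frame_bgr):
    # 1. Extract candidate corners in the previous frame.
    pts_prev = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                       qualityLevel=0.01, minDistance=10)
    if pts_prev is None:
        return np.empty((0, 2), np.float32)

    # 2. Track them into the current frame with pyramidal Lucas-Kanade.
    pts_cur, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_gray, cur_gray, pts_prev, None)
    pts_prev = pts_prev.reshape(-1, 2)
    pts_cur = pts_cur.reshape(-1, 2)
    ok = status.ravel() == 1

    # 3. Drop points that fall inside detected dynamic-object boxes.
    boxes = detect_dynamic_boxes(cur_frame_bgr)
    ok &= np.array([not in_any_box(p, boxes) for p in pts_cur])

    # 4. Drop flow vectors inconsistent with the dominant (ego) motion:
    #    a rough proxy for a proper epipolar or reprojection check.
    if ok.any():
        flow = pts_cur - pts_prev
        residual = np.linalg.norm(flow - np.median(flow[ok], axis=0), axis=1)
        ok &= residual < FLOW_RESIDUAL_THRESH

    return pts_cur[ok]  # static feature points kept for pose estimation
```

In the paper the detection and tracking results would feed the tightly coupled LiDAR-visual-inertial back end; the sketch only covers the visual front-end filtering step that the abstract describes.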
About the journal
Measurement Science and Technology publishes articles on new measurement techniques and associated instrumentation. Papers that describe experiments must represent an advance in measurement science or measurement technique rather than the application of established experimental techniques. Bearing in mind the multidisciplinary nature of the journal, authors must provide an introduction that makes clear the novelty, significance, and broader relevance of their work in a measurement context, as well as its relevance to the readership of Measurement Science and Technology. All submitted articles should address the uncertainty, precision and/or accuracy of the measurements presented.
Subject coverage includes the theory, practice, and application of measurement in physics, chemistry, engineering, and the environmental and life sciences, from inception to commercial exploitation. Publications in the journal should emphasize the novelty of reported methods, characterize them, and demonstrate their performance using examples or applications.