In the intelligent development of mobile platforms such as drones and service robots, simultaneous localization and mapping (SLAM) in real-world environments is a core enabling technology. Although existing point-line fusion visual-inertial SLAM can improve localization performance in low-texture environments, the high computational cost of line feature extraction limits its real-time performance. To address this issue, this paper proposes an Adaptive Point-Line Feature Fusion Visual-Inertial SLAM Algorithm (ATPL-VIO) that improves both real-time performance and localization accuracy. First, the scene is classified using the number of tracked feature points and the inter-frame velocity; based on this classification, the front end dynamically adjusts how line features are fused and controls the frequency of line feature extraction, strengthening data association, system robustness, and real-time performance. Second, according to scene characteristics, high-quality line features are selected by segment length and direction to participate in nonlinear optimization, achieving high-precision localization with a small set of effective features while reducing computation time. Finally, temporal data association is strengthened by fusing visual and IMU data within a sliding window, further reducing localization error. Experimental results show that, compared with PL-VIO, the proposed algorithm reduces the maximum error by 28.26%, reduces the root mean square error by 20.49%, and improves real-time performance by 12.60% while using only 23.85% of the line segment features. In addition, both indoor and outdoor experiments demonstrate the advantages of the proposed algorithm in localization accuracy and mapping performance.
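The two front-end mechanisms described above, gating line-feature extraction by scene type and filtering line segments by length and direction, can be sketched as follows. This is a minimal illustration only: the thresholds, function names, and the direction-binning heuristic are assumptions for exposition, not the paper's published parameters.

```python
import math

# Illustrative thresholds -- assumed values, not taken from the paper.
MIN_POINTS = 80          # fewer tracked points => low-texture scene
MAX_VELOCITY = 1.5       # larger inter-frame velocity => fast motion
MIN_SEGMENT_LEN = 30.0   # minimum segment length (pixels) worth keeping

def should_extract_lines(num_points: int, inter_frame_velocity: float) -> bool:
    """Decide whether to run line extraction on the current frame.

    Low-texture scenes (few points) and fast motion benefit from the extra
    data association that lines provide; otherwise skipping line extraction
    saves front-end time.
    """
    return num_points < MIN_POINTS or inter_frame_velocity > MAX_VELOCITY

def select_quality_lines(segments, max_keep=50):
    """Keep long segments while spreading them across direction bins.

    Each segment is ((x1, y1), (x2, y2)). Longer segments tend to be more
    stable across frames; binning by direction avoids retaining many
    near-parallel duplicates.
    """
    scored = []
    for (x1, y1), (x2, y2) in segments:
        length = math.hypot(x2 - x1, y2 - y1)
        if length < MIN_SEGMENT_LEN:
            continue  # discard short, unstable segments
        angle = math.atan2(y2 - y1, x2 - x1) % math.pi  # direction in [0, pi)
        scored.append((length, angle, ((x1, y1), (x2, y2))))
    scored.sort(key=lambda s: -s[0])  # longest first

    kept, used_bins = [], set()
    for length, angle, seg in scored:
        b = int(angle / (math.pi / 8))  # 8 coarse direction bins
        if b not in used_bins or len(used_bins) == 8:
            kept.append(seg)
            used_bins.add(b)
        if len(kept) >= max_keep:
            break
    return kept
```

In this sketch, frames that already track many points at low velocity skip line extraction entirely, which is one plausible way to realize the reported reduction in the number of line segments processed.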
