FAST-LIVO2: Fast, Direct LiDAR–Inertial–Visual Odometry

IEEE Transactions on Robotics, vol. 41, pp. 326–346. Published: 2024-11-19. DOI: 10.1109/TRO.2024.3502198. Impact Factor: 9.4; JCR Q1 (Robotics); CAS Tier 1 (Computer Science).
Chunran Zheng;Wei Xu;Zuhao Zou;Tong Hua;Chongjian Yuan;Dongjiao He;Bingyang Zhou;Zheng Liu;Jiarong Lin;Fangcheng Zhu;Yunfan Ren;Rong Wang;Fanle Meng;Fu Zhang

Abstract

This paper presents FAST-LIVO2, a fast and direct LiDAR-inertial-visual odometry framework designed for accurate and robust state estimation in SLAM tasks, enabling real-time robotic applications. FAST-LIVO2 integrates IMU, LiDAR, and image data through an efficient error-state iterated Kalman filter (ESIKF). To address the dimensional mismatch between LiDAR and image measurements, we adopt a sequential update strategy. Efficiency is further enhanced using direct methods for LiDAR and visual data fusion: the LiDAR module registers raw points without extracting features, while the visual module minimizes photometric errors without relying on feature extraction. Both LiDAR and visual measurements are fused into a unified voxel map. The LiDAR module constructs the geometric structure, while the visual module links image patches to LiDAR points, enabling precise image alignment. Plane priors from LiDAR points improve alignment accuracy and are refined dynamically during the process. Additionally, an on-demand raycast operation and real-time image exposure estimation enhance robustness. Extensive experiments on benchmark and custom datasets demonstrate that FAST-LIVO2 outperforms state-of-the-art systems in accuracy, robustness, and efficiency. Key modules are validated, and we showcase three applications: UAV navigation highlighting real-time capabilities, airborne mapping demonstrating high accuracy, and 3D model rendering (mesh-based and NeRF-based) showcasing suitability for dense mapping. Code and datasets are open-sourced on GitHub to benefit the robotics community.
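The sequential ESIKF update described in the abstract can be illustrated with a toy example. This is a minimal sketch, not the authors' implementation: FAST-LIVO2 runs an iterated error-state filter over the full IMU state with point-to-plane and photometric residuals, whereas here a 2-D state and scalar measurements stand in for the LiDAR and image updates.

```python
import numpy as np

def kalman_update(x, P, z, h, H, R):
    """One Kalman correction step: state x, covariance P,
    measurement z, predicted measurement h = h(x),
    measurement Jacobian H, measurement noise R."""
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ (z - h)                 # corrected state
    P = (np.eye(len(x)) - K @ H) @ P    # corrected covariance
    return x, P

# Toy 2-D state [position, velocity]; IMU propagation supplies the prior.
x = np.array([0.0, 1.0])
P = np.eye(2)

# Sequential update: apply the LiDAR-like measurement first, then the
# image-like measurement, instead of stacking both into one joint update.
z_lidar = np.array([0.12])
H_l = np.array([[1.0, 0.0]])            # observes position only
R_l = np.array([[0.01]])
x, P = kalman_update(x, P, z_lidar, H_l @ x, H_l, R_l)

z_img = np.array([0.10])
H_i = np.array([[1.0, 0.0]])
R_i = np.array([[0.04]])                # image measurement is noisier
x, P = kalman_update(x, P, z_img, H_i @ x, H_i, R_i)
```

Processing the two measurement types one after the other, each with its own Jacobian and noise model, avoids assembling measurements of very different dimensions into a single stacked update, which is the motivation the abstract gives for the sequential strategy.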
Source journal: IEEE Transactions on Robotics (Engineering/Robotics). CiteScore: 14.90; self-citation rate: 5.10%; articles per year: 259; review time: 6.0 months.
Journal description: The IEEE Transactions on Robotics (T-RO) is dedicated to publishing fundamental papers covering all facets of robotics, drawing on interdisciplinary approaches from computer science, control systems, electrical engineering, mathematics, mechanical engineering, and beyond. From industrial applications to service and personal assistants, surgical operations to space, underwater, and remote exploration, robots and intelligent machines play pivotal roles across various domains, including entertainment, safety, search and rescue, military applications, agriculture, and intelligent vehicles. Special emphasis is placed on intelligent machines and systems designed for unstructured environments, where a significant portion of the environment remains unknown and beyond direct sensing or control.