Depth-Visual-Inertial (DVI) Mapping System for Robust Indoor 3D Reconstruction

IEEE Robotics and Automation Letters · IF 4.6 · Q2 (Robotics) · CAS Region 2, Computer Science · Published 2024-10-29 · DOI: 10.1109/LRA.2024.3487496
Charles Hamesse, Michiel Vlaminck, Hiep Luong, Rob Haelterman
Volume 9, Issue 12, pp. 11313–11320.
Citation count: 0

Abstract

We propose the Depth-Visual-Inertial (DVI) mapping system: a robust multi-sensor fusion framework for dense 3D mapping using time-of-flight cameras equipped with RGB and IMU sensors. Inspired by recent developments in real-time LiDAR-based odometry and mapping, our system uses an error-state iterative Kalman filter for state estimation: it processes the inertial sensor's data for state propagation, followed by a state update first using visual-inertial odometry, then depth-based odometry. This sensor fusion scheme makes our system robust to degenerate scenarios (e.g. lack of visual or geometrical features, fast rotations) and to noisy sensor data, like those that can be obtained with off-the-shelf time-of-flight DVI sensors. For evaluation, we propose the new Bunker DVI Dataset, featuring data from multiple DVI sensors recorded in challenging conditions reflecting search-and-rescue operations. We show the superior robustness and precision of our method against previous work. Following the open science principle, we make both our source code and dataset publicly available.
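The fusion cascade described above (inertial propagation, then a visual-inertial update, then a depth-based update applied sequentially) can be illustrated with a toy linear Kalman filter. This is a minimal sketch, not the paper's error-state iterative implementation: the 1D constant-velocity model, the noise values, and the `predict`/`update` helpers are all illustrative assumptions.

```python
import numpy as np

def predict(x, P, u, F, B, Q):
    # Propagate state and covariance using the inertial input u.
    x = F @ x + B @ u
    P = F @ P @ F.T + Q
    return x, P

def update(x, P, z, H, R):
    # Standard Kalman measurement update with measurement z.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Toy 1D constant-velocity model: state = [position, velocity].
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt**2], [dt]])
Q = 1e-4 * np.eye(2)
H = np.array([[1.0, 0.0]])  # both "odometry sources" observe position here

x = np.zeros(2)
P = np.eye(2)

# One filter step, mirroring the abstract's ordering:
x, P = predict(x, P, np.array([0.2]), F, B, Q)               # IMU propagation
x, P = update(x, P, np.array([0.01]), H, 0.05 * np.eye(1))   # visual-inertial odometry update
x, P = update(x, P, np.array([0.02]), H, 0.02 * np.eye(1))   # depth-based odometry update
```

Applying the two measurement updates sequentially, as the abstract describes, lets each odometry source refine the state independently; if one source degenerates (e.g. no visual features), its update can simply be skipped while the other still constrains the estimate.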
Source journal
IEEE Robotics and Automation Letters
Category: Computer Science – Computer Science Applications
CiteScore: 9.60
Self-citation rate: 15.40%
Articles per year: 1428
Journal description: The scope of this journal is to publish peer-reviewed articles that provide a timely and concise account of innovative research ideas and application results, reporting significant theoretical findings and application case studies in areas of robotics and automation.