H-SLAM: Hybrid direct–indirect visual SLAM

IF 4.3 · CAS Region 2 (Computer Science) · Q1 (Automation & Control Systems) · Robotics and Autonomous Systems · Pub Date: 2024-06-06 · DOI: 10.1016/j.robot.2024.104729
Georges Younes, Douaa Khalil, John Zelek, Daniel Asmar
{"title":"H-SLAM: Hybrid direct–indirect visual SLAM","authors":"Georges Younes ,&nbsp;Douaa Khalil ,&nbsp;John Zelek ,&nbsp;Daniel Asmar","doi":"10.1016/j.robot.2024.104729","DOIUrl":null,"url":null,"abstract":"<div><p>The recent success of hybrid methods in monocular odometry has led to many attempts to generalize the performance gains to hybrid monocular SLAM. However, most attempts fall short in several respects, with the most prominent issue being the need for two different map representations (local and global maps), with each requiring different, computationally expensive, and often redundant processes to maintain. Moreover, these maps tend to drift with respect to each other, resulting in contradicting pose and scene estimates, and leading to catastrophic failure. In this paper, we propose a novel approach that makes use of descriptor sharing to generate a single inverse depth scene representation. This representation can be used locally, queried globally to perform loop closure, and has the ability to re-activate previously observed map points after redundant points are marginalized from the local map, eliminating the need for separate map maintenance processes. The maps generated by our method exhibit no drift between each other, and can be computed at a fraction of the computational cost and memory footprint required by other monocular SLAM systems. Despite the reduced resource requirements, the proposed approach maintains its robustness and accuracy, delivering performance comparable to state-of-the-art SLAM methods (<em>e.g</em>., LDSO, ORB-SLAM3) on the majority of sequences from well-known datasets like EuRoC, KITTI, and TUM VI. The source code is available at: <span>https://github.com/AUBVRL/fslam_ros_docker</span><svg><path></path></svg>.</p></div>","PeriodicalId":49592,"journal":{"name":"Robotics and Autonomous Systems","volume":"179 ","pages":"Article 104729"},"PeriodicalIF":4.3000,"publicationDate":"2024-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Robotics and Autonomous Systems","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0921889024001131","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"AUTOMATION & CONTROL SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract

The recent success of hybrid methods in monocular odometry has led to many attempts to generalize the performance gains to hybrid monocular SLAM. However, most attempts fall short in several respects, with the most prominent issue being the need for two different map representations (local and global maps), each requiring different, computationally expensive, and often redundant processes to maintain. Moreover, these maps tend to drift with respect to each other, resulting in contradictory pose and scene estimates and leading to catastrophic failure. In this paper, we propose a novel approach that uses descriptor sharing to generate a single inverse depth scene representation. This representation can be used locally, queried globally to perform loop closure, and can re-activate previously observed map points after redundant points are marginalized from the local map, eliminating the need for separate map maintenance processes. The maps generated by our method exhibit no drift with respect to each other, and can be computed at a fraction of the computational cost and memory footprint required by other monocular SLAM systems. Despite the reduced resource requirements, the proposed approach maintains its robustness and accuracy, delivering performance comparable to state-of-the-art SLAM methods (e.g., LDSO, ORB-SLAM3) on the majority of sequences from well-known datasets such as EuRoC, KITTI, and TUM VI. The source code is available at: https://github.com/AUBVRL/fslam_ros_docker.
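The abstract's key idea, a single scene representation whose points serve both the direct (photometric, inverse depth) and indirect (descriptor-based) pipelines, can be illustrated with a minimal C++ sketch. The type and helpers below are hypothetical, written only to make the idea concrete; the authors' actual data structures are in the linked repository.

    // Hypothetical sketch of a hybrid map point: one record holds the
    // inverse-depth state used for local photometric alignment AND a shared
    // feature descriptor usable for global loop-closure queries, so no
    // separate local/global map entries are needed.
    #include <array>
    #include <bitset>
    #include <cstdint>

    struct HybridMapPoint {
        double inverseDepth = 0.0;          // rho = 1/Z in the host keyframe
        double inverseDepthVariance = 1e6;  // shrinks as the estimate converges
        std::array<double, 2> hostPixel{};  // (u, v) in the host keyframe image
        std::array<std::uint8_t, 32> descriptor{};  // e.g., a 256-bit binary descriptor
        bool marginalized = false;  // dropped from the local window, but kept so it
                                    // can be re-activated when re-observed globally
    };

    // Back-project the point into its host camera frame from inverse depth,
    // assuming pinhole intrinsics (fx, fy, cx, cy).
    inline std::array<double, 3> backProject(const HybridMapPoint& p,
                                             double fx, double fy,
                                             double cx, double cy) {
        const double z = 1.0 / p.inverseDepth;
        return {(p.hostPixel[0] - cx) / fx * z,
                (p.hostPixel[1] - cy) / fy * z,
                z};
    }

    // Hamming distance between two binary descriptors: the matching primitive
    // a loop-closure query over marginalized points would rely on.
    inline int hammingDistance(const std::array<std::uint8_t, 32>& a,
                               const std::array<std::uint8_t, 32>& b) {
        int d = 0;
        for (std::size_t i = 0; i < a.size(); ++i)
            d += static_cast<int>(std::bitset<8>(a[i] ^ b[i]).count());
        return d;
    }

Because the descriptor travels with the same point that carries the inverse depth, marginalizing a point from the local window does not discard it: a later descriptor match can re-activate it, which is what removes the need for a second, separately maintained global map.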

Source Journal

Robotics and Autonomous Systems (Engineering Technology – Robotics)
CiteScore: 9.00
Self-citation rate: 7.00%
Articles per year: 164
Review time: 4.5 months

Journal description: Robotics and Autonomous Systems will carry articles describing fundamental developments in the field of robotics, with special emphasis on autonomous systems. An important goal of this journal is to extend the state of the art in both symbolic and sensory based robot control and learning in the context of autonomous systems. Robotics and Autonomous Systems will carry articles on the theoretical, computational and experimental aspects of autonomous systems, or modules of such systems.
Latest Articles in This Journal

Editorial Board
A sensorless approach for cable failure detection and identification in cable-driven parallel robots
Learning latent causal factors from the intricate sensor feedback of contact-rich robotic assembly tasks
GPS-free autonomous navigation in cluttered tree rows with deep semantic segmentation
Robust trajectory tracking for omnidirectional robots by means of anti-peaking linear active disturbance rejection