GSLAM: Initialization-Robust Monocular Visual SLAM via Global Structure-from-Motion

Chengzhou Tang, Oliver Wang, P. Tan
Proceedings. International Conference on 3D Vision (3DV), pages 155–164. Published 2017-08-16. DOI: 10.1109/3DV.2017.00027. Cited 14 times.

Abstract

Many monocular visual SLAM algorithms are derived from incremental structure-from-motion (SfM) methods. This work proposes a novel monocular SLAM method which integrates recent advances made in global SfM. In particular, we present two main contributions to visual SLAM. First, we solve the visual odometry problem by a novel rank-1 matrix factorization technique which is more robust to the errors in map initialization. Second, we adopt a recent global SfM method for the pose-graph optimization, which leads to a multi-stage linear formulation and enables L1 optimization for better robustness to false loops. The combination of these two approaches generates more robust reconstruction and is significantly faster (4X) than recent state-of-the-art SLAM systems. We also present a new dataset recorded with ground truth camera motion in a Vicon motion capture room, and compare our method to prior systems on it and established benchmark datasets.
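The abstract's second contribution rests on L1 optimization being more robust to false loop closures than a standard least-squares (L2) formulation. The following is a minimal sketch of that idea, not the paper's pose-graph implementation: it approximately minimizes an L1 residual on a toy linear problem via iteratively reweighted least squares (IRLS), where a few gross outliers play the role of false loops. All names and parameters here are illustrative assumptions.

```python
import numpy as np

def irls_l1(A, b, iters=50, eps=1e-8):
    """Approximately minimize ||Ax - b||_1 via IRLS (illustrative sketch,
    not GSLAM's multi-stage pose-graph solver)."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]  # L2 initialization
    for _ in range(iters):
        r = np.abs(A @ x - b)
        w = np.sqrt(1.0 / np.maximum(r, eps))  # down-weight large residuals
        x = np.linalg.lstsq(A * w[:, None], b * w, rcond=None)[0]
    return x

# Toy line fit: three gross outliers (analogous to false loop closures)
# drag the L2 solution away from the true parameters, while the L1
# solution stays close to them.
t = np.linspace(0.0, 1.0, 20)
A = np.column_stack([t, np.ones_like(t)])
b = A @ np.array([2.0, -1.0])      # true slope 2, intercept -1
b[[3, 9, 15]] += 10.0              # inject gross outliers
x_l2 = np.linalg.lstsq(A, b, rcond=None)[0]
x_l1 = irls_l1(A, b)
print("L2 fit:", x_l2, "L1 fit:", x_l1)
```

The same principle, applied to relative-pose constraints in a pose graph rather than a line fit, is why an L1 objective can tolerate a handful of spurious loop-closure edges that would corrupt an L2 solution.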