Embedding Temporally Consistent Depth Recovery for Real-time Dense Mapping in Visual-inertial Odometry

Hui Cheng, Zhuoqi Zheng, Jinhao He, Chongyu Chen, Keze Wang, Liang Lin
DOI: 10.1109/IROS.2018.8593917
Published in: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), October 2018, pp. 693–698.

Abstract

Dense mapping has long been a goal of simultaneous localization and mapping (SLAM), especially for applications that require fast and dense scene information. Visual-inertial odometry (VIO) is a lightweight and effective solution for fast self-localization. However, VIO-based SLAM systems have difficulty providing dense mapping results due to the spatial sparsity and temporal instability of VIO depth estimates. Despite great efforts on real-time mapping and depth recovery from sparse measurements, existing solutions for VIO-based SLAM still fail to preserve sufficient geometric detail in their results. In this paper, we propose to embed depth recovery into VIO-based SLAM for real-time dense mapping. In the proposed method, we present a subspace-based stabilization scheme to maintain temporal consistency and design a hierarchical pipeline for edge-preserving depth interpolation to reduce the computational burden. Extensive experiments demonstrate that our method achieves an accuracy improvement of up to 49.1 cm over state-of-the-art learning-based depth-recovery methods and reconstructs sufficient geometric detail in dense mapping when only 0.07% of the depth samples are available. Since a simple CPU implementation of our method already runs at 10–20 fps, we believe it is well suited to practical SLAM systems with strict computational requirements.
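The abstract names two components: a subspace-based temporal stabilization scheme and a hierarchical pipeline for edge-preserving depth interpolation from sparse samples. As a rough illustration of the second idea only (not the authors' implementation), the following hypothetical Python sketch densifies a sparse depth map using joint-bilateral weights guided by the camera image, so that depth samples propagate mainly across pixels of similar intensity and are blocked at image edges. The function name `interpolate_depth`, its parameters, and the single-scale formulation are all assumptions for illustration; the paper's actual pipeline is hierarchical (coarse-to-fine) for efficiency.

```python
import numpy as np

def interpolate_depth(sparse_depth, image, radius=5, sigma_s=3.0, sigma_r=0.1):
    """Fill missing depth by joint-bilateral weighting of nearby sparse samples.

    sparse_depth: 2-D array with 0 where no depth sample is available.
    image: 2-D grayscale guide image in [0, 1], same shape as sparse_depth.
    """
    h, w = sparse_depth.shape
    dense = sparse_depth.copy()
    ys, xs = np.nonzero(sparse_depth == 0)  # pixels that need a depth value
    for y, x in zip(ys, xs):
        # Local window around the target pixel, clipped at the image border.
        y0, y1 = max(0, y - radius), min(h, y + radius + 1)
        x0, x1 = max(0, x - radius), min(w, x + radius + 1)
        patch = sparse_depth[y0:y1, x0:x1]
        mask = patch > 0  # which window pixels actually carry a sample
        if not mask.any():
            continue  # no sample nearby; a coarser pyramid level would cover this
        py, px = np.mgrid[y0:y1, x0:x1]
        # Spatial term: prefer close samples; photometric term: prefer samples
        # whose image intensity matches the target pixel (edge preservation).
        spatial = ((py - y) ** 2 + (px - x) ** 2) / (2 * sigma_s ** 2)
        photometric = (image[y0:y1, x0:x1] - image[y, x]) ** 2 / (2 * sigma_r ** 2)
        wgt = np.exp(-(spatial + photometric)) * mask
        if wgt.sum() > 0:
            dense[y, x] = (wgt * patch).sum() / wgt.sum()
    return dense
```

Because the photometric weight collapses across intensity discontinuities, a sample on one side of an image edge contributes almost nothing to pixels on the other side, which is what keeps depth boundaries sharp in the interpolated map.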