σ-DVO: Sensor Noise Model Meets Dense Visual Odometry

B. W. Babu, Soohwan Kim, Zhixin Yan, Liu Ren
{"title":"σ-DVO: Sensor Noise Model Meets Dense Visual Odometry","authors":"B. W. Babu, Soohwan Kim, Zhixin Yan, Liu Ren","doi":"10.1109/ISMAR.2016.11","DOIUrl":null,"url":null,"abstract":"In this paper we propose a novel method called s-DVO for dense visual odometry using a probabilistic sensor noise model. In contrast to sparse visual odometry, where camera poses are estimated based on matched visual features, we apply dense visual odometry which makes full use of all pixel information from an RGB-D camera. Previously, t-distribution was used to model photometric and geometric errors in order to reduce the impacts of outliers in the optimization. However, this approach has the limitation that it only uses the error value to determine outliers without considering the physical process. Therefore, we propose to apply a probabilistic sensor noise model to weigh each pixel by propagating linearized uncertainty. Furthermore, we find that the geometric errors are well represented with the sensor noise model, while the photometric errors are not. Finally we propose a hybrid approach which combines t-distribution for photometric errors and a probabilistic sensor noise model for geometric errors. We extend the dense visual odometry and develop a visual SLAM system that incorporates keyframe generation, loop constraint detection and graph optimization. Experimental results with standard benchmark datasets show that our algorithm outperforms previous methods by about a 25% reduction in the absolute trajectory error.","PeriodicalId":146808,"journal":{"name":"2016 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)","volume":"2002 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"16","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2016 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISMAR.2016.11","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 16

Abstract

In this paper we propose a novel method called σ-DVO for dense visual odometry using a probabilistic sensor noise model. In contrast to sparse visual odometry, where camera poses are estimated from matched visual features, we apply dense visual odometry, which makes full use of all pixel information from an RGB-D camera. Previously, the t-distribution was used to model photometric and geometric errors in order to reduce the impact of outliers on the optimization. However, this approach has the limitation that it only uses the error value to identify outliers, without considering the physical sensing process. Therefore, we propose to apply a probabilistic sensor noise model that weights each pixel by propagating linearized uncertainty. Furthermore, we find that the geometric errors are well represented by the sensor noise model, while the photometric errors are not. Finally, we propose a hybrid approach that combines the t-distribution for photometric errors with a probabilistic sensor noise model for geometric errors. We extend the dense visual odometry into a visual SLAM system that incorporates keyframe generation, loop constraint detection, and graph optimization. Experimental results on standard benchmark datasets show that our algorithm outperforms previous methods, achieving about a 25% reduction in absolute trajectory error.
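To make the hybrid weighting scheme concrete, the sketch below illustrates in Python how per-pixel weights could be formed in an iteratively reweighted least-squares pose estimator: photometric residuals are down-weighted with a Student-t weight, while geometric residuals receive inverse-variance weights from a propagated depth-noise model. The quadratic depth-noise model sigma_z = k·z², the constants k and nu, and the function names are illustrative assumptions for this sketch, not values or interfaces taken from the paper.

```python
import numpy as np

# Hedged sketch (not the authors' exact implementation): per-pixel weights for a
# hybrid robust cost in dense RGB-D odometry, combining a Student-t weight for
# photometric residuals with a sensor-noise-model weight for geometric residuals.

def t_distribution_weights(residuals, nu=5.0, sigma=1.0):
    """IRLS weights from a Student-t error model (assumed parameters)."""
    r2 = (residuals / sigma) ** 2
    return (nu + 1.0) / (nu + r2)

def sensor_noise_weights(depths, k=0.0012):
    """Inverse-variance weights from an assumed quadratic RGB-D depth-noise model."""
    sigma_z = k * depths ** 2          # propagated 1-sigma depth uncertainty
    return 1.0 / np.maximum(sigma_z ** 2, 1e-12)

def hybrid_weights(photo_res, geo_res, depths):
    """Photometric residuals get t-distribution weights; geometric residuals get
    sensor-noise weights, in the spirit of the hybrid formulation described above."""
    w_photo = t_distribution_weights(photo_res)
    w_geo = sensor_noise_weights(depths)
    return w_photo, w_geo
```

In a full pipeline, weights of this kind would enter the normal equations of the Gauss-Newton (or Levenberg-Marquardt) pose update at every iteration, so that unreliable pixels contribute less to the estimated camera motion.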