Visual Odometry through Appearance- and Feature-Based Method with Omnidirectional Images

IF 1.4 · Q4 (Robotics) · Journal of Robotics · Published: 2012-09-06 · DOI: 10.1155/2012/797063
David García, L. F. Rojo, A. G. Aparicio, L. P. Castelló, Ó. R. García
Journal of Robotics, vol. 18, pp. 1-13.
Citations: 27

Abstract

In the field of mobile autonomous robots, visual odometry entails retrieving the motion transformation between two consecutive poses of the robot using a camera sensor alone. Visual odometry provides essential information for trajectory estimation in problems such as localization and SLAM (Simultaneous Localization and Mapping). In this work we present a motion-estimation method based on a single omnidirectional camera. We exploit the maximized horizontal field of view provided by this camera, which allows us to encode a large amount of scene information in a single image. The motion transformation between two poses is computed incrementally, since only the processing of two consecutive omnidirectional images is required. In particular, we exploit the versatility of the information gathered by omnidirectional images to implement both an appearance-based and a feature-based method for visual odometry. We carried out a set of experiments in real indoor environments to test the validity and suitability of both methods. The data used in the experiments consist of large sets of omnidirectional images captured along the robot's trajectory in three different real scenarios. Experimental results demonstrate the accuracy of the estimations and the capability of both methods to work in real time.
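The incremental nature of visual odometry described in the abstract can be sketched as follows: each pair of consecutive images yields a relative motion in the robot's local frame, and these relative motions are chained together into a trajectory. This is a minimal illustrative sketch, not the paper's implementation; the per-step motions `(dx, dy, dtheta)` stand in for whatever the appearance-based or feature-based matcher would recover, and all names below are hypothetical.

```python
import math

def compose(pose, delta):
    """Compose an absolute SE(2) pose with one relative odometry step.

    pose  = (x, y, theta): robot pose in the world frame.
    delta = (dx, dy, dtheta): motion between two consecutive images,
            expressed in the robot's local frame.
    """
    x, y, th = pose
    dx, dy, dth = delta
    # Rotate the local translation into the world frame, then accumulate.
    return (x + dx * math.cos(th) - dy * math.sin(th),
            y + dx * math.sin(th) + dy * math.cos(th),
            th + dth)

# Hypothetical per-step motions, as if recovered from consecutive
# omnidirectional images: forward, forward then turn left, forward.
steps = [(1.0, 0.0, 0.0),
         (1.0, 0.0, math.pi / 2),
         (1.0, 0.0, 0.0)]

pose = (0.0, 0.0, 0.0)
trajectory = [pose]
for delta in steps:
    pose = compose(pose, delta)
    trajectory.append(pose)
```

Because each estimate only requires the previous image pair, the trajectory is built online; the trade-off, as in any odometry, is that per-step errors accumulate over time, which is why such estimates are typically fed into a localization or SLAM back end.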
Source journal: Journal of Robotics
CiteScore: 3.70
Self-citation rate: 5.60%
Articles published per year: 77
Review time: 22 weeks
Journal introduction: Journal of Robotics publishes papers on all aspects of automated mechanical devices, from their design and fabrication to their testing and practical implementation. The journal welcomes submissions from the associated fields of materials science, electrical and computer engineering, and machine learning and artificial intelligence that contribute towards advances in the technology and understanding of robotic systems.