A new strategy for improving the self-positioning precision of an autonomous mobile robot

An Zhanfu, Pei Dong, Yong HongWu, Wang Quanzhou
{"title":"A new strategy for improving the self-positioning precision of an autonomous mobile robot","authors":"An Zhanfu, Pei Dong, Yong HongWu, Wang Quanzhou","doi":"10.1109/ICOT.2014.6956605","DOIUrl":null,"url":null,"abstract":"We address the problem of precise self-positioning of an autonomous mobile robot. This problem is formulated as a manifold perception algorithm such that the precision position of a mobile robot is evaluated based on the distance from an obstacle, critical features or signs of surroundings and the depth of its surrounding images. We propose to accurately localize the position of a mobile robot using an algorithm that fusing the local plane coordinates information getting from laser ranging and space visual information represented by features of a depth image with variational weights, by which the local distance information of laser ranging and depth vision information are relatively complemented. First, we utilize EKF algorithm on the data gathered by laser to get coarse location of a robot, then open RGB-D camera to capture depth images and we extract SURF features of images, when the features are matched with training examples, the RANSAC algorithm is used to check consistency of spatial structures. Finally, extensive experiments show that our fusion method has significantly improved location results of accuracy compared with the results using either EKF on laser data or SURF features matching on depth images. Especially, experiments with variational fusion weights demonstrated that with this method our robot was capable of accomplishing self-location precisely in real time.","PeriodicalId":343641,"journal":{"name":"2014 International Conference on Orange Technologies","volume":"23 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2014-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2014 International Conference on Orange Technologies","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICOT.2014.6956605","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 2

Abstract

We address the problem of precise self-positioning for an autonomous mobile robot. The problem is formulated as a multi-source perception task in which the robot's position is estimated from its distance to obstacles, from salient features or signs in the surroundings, and from the depth of its surrounding images. We propose to localize the robot accurately with an algorithm that fuses the local plane-coordinate information obtained from laser ranging with the spatial visual information represented by depth-image features, using variational weights so that the local distance information from laser ranging and the depth-vision information complement each other. First, an EKF is applied to the laser data to obtain a coarse position estimate; an RGB-D camera then captures depth images, SURF features are extracted from these images, and when the features are matched against training examples the RANSAC algorithm is used to check the consistency of their spatial structure. Finally, extensive experiments show that the fusion method significantly improves localization accuracy compared with using either the EKF on laser data alone or SURF feature matching on depth images alone. In particular, the experiments with variational fusion weights demonstrate that the robot can localize itself precisely in real time.
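The abstract describes a two-stage pipeline: an EKF over the laser ranging data yields a coarse planar position, while SURF features from depth images, matched under a RANSAC consistency check, yield a second vision-based estimate; the two are then fused with variational weights. The sketch below illustrates one plausible form of that final fusion step, assuming both estimates are available as 2-D positions with covariances. The inverse-covariance weighting, the function name `fuse_estimates`, and all numeric values are assumptions for illustration, not the paper's exact formulation.

```python
# Minimal sketch of a weighted fusion of a laser/EKF position estimate and a
# depth-vision position estimate. The weights are derived from the estimate
# covariances (information form), which is one plausible reading of
# "variational fusion weights"; the paper's scheme may differ.
import numpy as np

def fuse_estimates(pose_laser, cov_laser, pose_vision, cov_vision):
    """Fuse two independent 2-D position estimates with covariance-derived weights.

    pose_*: np.ndarray of shape (2,), the (x, y) estimate in the local plane.
    cov_*:  np.ndarray of shape (2, 2), the covariance of that estimate.
    Returns the fused position and its covariance.
    """
    # Information (inverse-covariance) form: the more certain sensor gets more weight.
    info_laser = np.linalg.inv(cov_laser)
    info_vision = np.linalg.inv(cov_vision)
    info_fused = info_laser + info_vision
    cov_fused = np.linalg.inv(info_fused)
    pose_fused = cov_fused @ (info_laser @ pose_laser + info_vision @ pose_vision)
    return pose_fused, cov_fused

if __name__ == "__main__":
    # Hypothetical numbers: the laser/EKF estimate is tighter along x,
    # the depth-vision estimate is tighter along y.
    pose_laser = np.array([1.02, 2.10])
    cov_laser = np.diag([0.01, 0.09])
    pose_vision = np.array([0.95, 2.01])
    cov_vision = np.diag([0.08, 0.02])
    pose, cov = fuse_estimates(pose_laser, cov_laser, pose_vision, cov_vision)
    print("fused pose:", pose)
    print("fused covariance:\n", cov)
```

Weighting each estimate by its inverse covariance lets whichever sensor is currently more certain dominate the fused result, which reflects the complementary roles the abstract assigns to laser ranging and depth vision.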