Pose Estimation of Mobile Robot Using Image and Point-Cloud Data

Journal of Electrical Engineering & Technology, IF 1.6, CAS Zone 4 (Engineering & Technology), JCR Q3 (Engineering, Electrical & Electronic), Pub Date: 2024-09-04, DOI: 10.1007/s42835-024-02030-3
Sung Won An, Hong Seong Park
{"title":"Pose Estimation of Mobile Robot Using Image and Point-Cloud Data","authors":"Sung Won An, Hong Seong Park","doi":"10.1007/s42835-024-02030-3","DOIUrl":null,"url":null,"abstract":"<p>In Simultaneous Localization and Mapping (SLAM) techniques, the precise estimation of the initial pose of a mobile robot presents a significant challenge. The initial pose is crucial as it can significantly reduce the accumulated errors in SLAM. Despite various advancements in pose estimation, accurately determining the initial pose in different application scenarios continues to be a complex task, often requiring techniques such as loop closure. However, loop closure detection may not always be feasible, as it is time-consuming and relies on the robot revisiting previous paths or its initial starting position. In essence, most localization methods such as SLAM critically require the initial pose to estimate the mobile robot’s pose and the more precise initial pose makes the localization process to converge quickly. Addressing these persistent challenges, this paper proposes a novel method that utilizes both image and point cloud data, allowing for easy adaptation across diverse and dynamic environments. The method integrates well-known technologies such as NetVLAD, RootSIFT, 5-Point, and Iterative Closest Point (ICP) algorithms. This approach not only addresses the initial pose estimation problem but also provides an alternative to existing landmarks, enhancing adaptability to diverse and dynamic environments. NetVLAD is utilized to find the most similar image data in stored images by comparing the image captured by the mobile robot with the stored images with pose data. The relative pose is estimated by applying the RootSIFT and 5-Point algorithm, and ICP algorithm to the found image and point cloud data, respectively. This method determines the final pose of the mobile robot by combining each relative pose extracted from image data and point cloud data through weighted integration. The effectiveness of the proposed method is verified by comparing it with existing deep learning-based pose estimation methods. This method can accurately estimate poses, including the initial pose, using much less data than existing deep learning methods, even in diverse and dynamic environments. Furthermore, this method is applicable not only when using image data alone but also when both image and point cloud data are available.</p>","PeriodicalId":15577,"journal":{"name":"Journal of Electrical Engineering & Technology","volume":null,"pages":null},"PeriodicalIF":1.6000,"publicationDate":"2024-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Electrical Engineering & Technology","FirstCategoryId":"5","ListUrlMain":"https://doi.org/10.1007/s42835-024-02030-3","RegionNum":4,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0

Abstract

In Simultaneous Localization and Mapping (SLAM) techniques, the precise estimation of the initial pose of a mobile robot presents a significant challenge. The initial pose is crucial because it can significantly reduce the accumulated errors in SLAM. Despite various advances in pose estimation, accurately determining the initial pose in different application scenarios remains a complex task, often requiring techniques such as loop closure. However, loop closure detection is not always feasible, as it is time-consuming and relies on the robot revisiting previous paths or its initial starting position. In essence, most localization methods, including SLAM, critically depend on the initial pose to estimate the mobile robot’s pose, and a more precise initial pose allows the localization process to converge more quickly. Addressing these persistent challenges, this paper proposes a novel method that utilizes both image and point-cloud data, allowing for easy adaptation across diverse and dynamic environments. The method integrates well-known techniques such as NetVLAD, RootSIFT, the 5-point algorithm, and the Iterative Closest Point (ICP) algorithm. This approach not only addresses the initial pose estimation problem but also provides an alternative to existing landmarks, enhancing adaptability to diverse and dynamic environments. NetVLAD is used to find the most similar image among a database of stored images with associated pose data by comparing them with the image captured by the mobile robot. Relative poses are then estimated by applying RootSIFT and the 5-point algorithm to the retrieved image and the ICP algorithm to the point-cloud data. The final pose of the mobile robot is determined by combining the relative poses extracted from image data and point-cloud data through weighted integration. The effectiveness of the proposed method is verified by comparing it with existing deep learning-based pose estimation methods. The method can accurately estimate poses, including the initial pose, using much less data than existing deep learning methods, even in diverse and dynamic environments. Furthermore, it is applicable not only when image data alone is available but also when both image and point-cloud data are available.
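
The pipeline described above combines image retrieval, feature-based relative pose estimation, point-cloud registration, and a weighted fusion step. The sketch below illustrates, under the assumption of standard OpenCV, Open3D, and SciPy APIs, how the RootSIFT + 5-point step, the ICP step, and a weighted combination of the two relative poses might be wired together. It is not the authors' implementation: the NetVLAD retrieval step is assumed to have already selected the most similar stored image and scan, and names such as `fuse_poses` and the weight `w_img` are hypothetical.

```python
# Minimal, illustrative sketch of the described pipeline (not the authors' code).
import cv2
import numpy as np
import open3d as o3d
from scipy.spatial.transform import Rotation, Slerp


def rootsift(des, eps=1e-7):
    # RootSIFT: L1-normalize SIFT descriptors, then take the element-wise square root.
    des = des / (np.abs(des).sum(axis=1, keepdims=True) + eps)
    return np.sqrt(des)


def to_homogeneous(R, t):
    # Assemble a 4x4 transform from a rotation matrix and translation vector.
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t.ravel()
    return T


def relative_pose_from_images(img_query, img_ref, K):
    # Relative rotation/translation from RootSIFT matches and the 5-point algorithm.
    sift = cv2.SIFT_create()
    kp_q, des_q = sift.detectAndCompute(img_query, None)
    kp_r, des_r = sift.detectAndCompute(img_ref, None)
    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(rootsift(des_q), rootsift(des_r), k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # Lowe's ratio test
    pts_q = np.float32([kp_q[m.queryIdx].pt for m in good])
    pts_r = np.float32([kp_r[m.trainIdx].pt for m in good])
    E, mask = cv2.findEssentialMat(pts_q, pts_r, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts_q, pts_r, K, mask=mask)
    return to_homogeneous(R, t)  # translation is only known up to scale


def relative_pose_from_clouds(cloud_query, cloud_ref, init=np.eye(4)):
    # Point-to-point ICP between the current scan and the stored scan.
    reg = o3d.pipelines.registration.registration_icp(
        cloud_query, cloud_ref,
        max_correspondence_distance=0.5, init=init,
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return reg.transformation  # 4x4 homogeneous transform


def fuse_poses(T_img, T_icp, w_img=0.5):
    # Hypothetical weighted integration: blend translations linearly and
    # rotations by slerp; the paper's actual weighting scheme may differ.
    rots = Rotation.from_matrix(np.stack([T_img[:3, :3], T_icp[:3, :3]]))
    slerp = Slerp([0.0, 1.0], rots)
    T = np.eye(4)
    T[:3, :3] = slerp([1.0 - w_img]).as_matrix()[0]
    T[:3, 3] = w_img * T_img[:3, 3] + (1.0 - w_img) * T_icp[:3, 3]
    return T
```

Note that the translation recovered by the 5-point algorithm is defined only up to scale; in practice it would need to be rescaled (for example, from depth or odometry) before being fused with the metric ICP estimate, and the weighted integration shown here is a simple stand-in for whatever scheme the paper actually uses.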


Source Journal
Journal of Electrical Engineering & Technology (Engineering, Electrical & Electronic)
CiteScore: 4.00
Self-citation rate: 15.80%
Articles published: 321
Review time: 3.8 months
Journal Introduction: The Journal of Electrical Engineering and Technology (JEET), the official publication of the Korean Institute of Electrical Engineers (KIEE), is published bimonthly and released its first issue in March 2006. The journal is open to submissions from scholars and experts in the wide areas of electrical engineering technologies. Its scope includes all issues in the field of electrical engineering and technology, covering techniques for electrical power engineering, electrical machinery and energy conversion systems, electrophysics and applications, and information and controls.
Latest Articles in This Journal
Parameter Solution of Fractional Order PID Controller for Home Ventilator Based on Genetic-Ant Colony Algorithm
Fault Detection of Flexible DC Grid Based on Empirical Wavelet Transform and WOA-CNN
Aggregation and Bidding Strategy of Virtual Power Plant
Power Management of Hybrid System Using Coronavirus Herd Immunity Optimizer Algorithm
A Review on Power System Security Issues in the High Renewable Energy Penetration Environment