A robust RGB-D visual odometry with moving object detection in dynamic indoor scenes

IET Cybersystems and Robotics · IF 1.5 · Q3 (Automation & Control Systems) · Pub Date: 2023-02-16 · DOI: 10.1049/csy2.12079
Xianglong Zhang, Haiyang Yu, Yan Zhuang
{"title":"A robust RGB-D visual odometry with moving object detection in dynamic indoor scenes","authors":"Xianglong Zhang,&nbsp;Haiyang Yu,&nbsp;Yan Zhuang","doi":"10.1049/csy2.12079","DOIUrl":null,"url":null,"abstract":"<p>Simultaneous localisation and mapping (SLAM) are the basis for many robotic applications. As the front end of SLAM, visual odometry is mainly used to estimate camera pose. In dynamic scenes, classical methods are deteriorated by dynamic objects and cannot achieve satisfactory results. In order to improve the robustness of visual odometry in dynamic scenes, this paper proposed a dynamic region detection method based on RGB-D images. Firstly, all feature points on the RGB image are classified as dynamic and static using a triangle constraint and the epipolar geometric constraint successively. Meanwhile, the depth image is clustered using the K-Means method. The classified feature points are mapped to the clustered depth image, and a dynamic or static label is assigned to each cluster according to the number of dynamic feature points. Subsequently, a dynamic region mask for the RGB image is generated based on the dynamic clusters in the depth image, and the feature points covered by the mask are all removed. The remaining static feature points are applied to estimate the camera pose. 
Finally, some experimental results are provided to demonstrate the feasibility and performance.</p>","PeriodicalId":34110,"journal":{"name":"IET Cybersystems and Robotics","volume":"5 1","pages":""},"PeriodicalIF":1.5000,"publicationDate":"2023-02-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/csy2.12079","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IET Cybersystems and Robotics","FirstCategoryId":"1085","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1049/csy2.12079","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"AUTOMATION & CONTROL SYSTEMS","Score":null,"Total":0}
引用次数: 2

Abstract

Simultaneous localisation and mapping (SLAM) is the basis for many robotic applications. As the front end of SLAM, visual odometry is mainly used to estimate the camera pose. In dynamic scenes, classical methods are degraded by moving objects and cannot achieve satisfactory results. To improve the robustness of visual odometry in dynamic scenes, this paper proposes a dynamic region detection method based on RGB-D images. First, all feature points in the RGB image are classified as dynamic or static using a triangle constraint followed by the epipolar geometric constraint. Meanwhile, the depth image is clustered using the K-means method. The classified feature points are mapped to the clustered depth image, and each cluster is labelled dynamic or static according to the number of dynamic feature points it contains. Subsequently, a dynamic region mask for the RGB image is generated from the dynamic clusters in the depth image, and all feature points covered by the mask are removed. The remaining static feature points are used to estimate the camera pose. Finally, experimental results are provided to demonstrate the feasibility and performance of the method.
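The pipeline the abstract describes (epipolar-constraint classification of matched features, K-means clustering of the depth image, and cluster-level dynamic labelling) can be sketched in NumPy. This is a minimal illustration, not the authors' implementation: the fundamental matrix `F` is assumed to be given (in practice it would be estimated from matches, e.g. with RANSAC), the triangle constraint is omitted, and the threshold `thresh`, cluster count `k`, and vote count `min_dyn` are illustrative values.

```python
import numpy as np

def epipolar_residual(pts1, pts2, F):
    """Distance (px) of each matched point in image 2 to the epipolar
    line induced by its correspondence in image 1."""
    ones = np.ones((len(pts1), 1))
    p1 = np.hstack([pts1, ones])            # homogeneous coordinates
    p2 = np.hstack([pts2, ones])
    lines = p1 @ F.T                        # epipolar lines a*x + b*y + c = 0
    num = np.abs(np.sum(p2 * lines, axis=1))
    den = np.hypot(lines[:, 0], lines[:, 1])
    return num / den

def classify_dynamic(pts1, pts2, F, thresh=1.0):
    """Flag a feature as dynamic if its epipolar residual exceeds `thresh`."""
    return epipolar_residual(pts1, pts2, F) > thresh

def kmeans_depth(depth, k=3, iters=20, seed=0):
    """Plain 1-D K-means over depth values; returns per-pixel cluster labels."""
    rng = np.random.default_rng(seed)
    vals = depth.astype(float).ravel()
    centers = rng.choice(vals, size=k, replace=False)
    for _ in range(iters):
        labels = np.abs(vals[:, None] - centers[None, :]).argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):         # leave empty clusters unchanged
                centers[j] = vals[labels == j].mean()
    return labels.reshape(depth.shape), centers

def dynamic_mask(labels, feat_px, feat_dynamic, min_dyn=3):
    """Mark a depth cluster (and hence its mask region) dynamic when it
    contains at least `min_dyn` dynamic feature points."""
    mask = np.zeros(labels.shape, dtype=bool)
    feat_labels = labels[feat_px[:, 1], feat_px[:, 0]]   # (x, y) -> label
    for c in np.unique(labels):
        if np.count_nonzero(feat_dynamic & (feat_labels == c)) >= min_dyn:
            mask |= labels == c
    return mask
```

Static features are then the matches for which `classify_dynamic` is false and whose pixels fall outside `dynamic_mask`; only those would be passed to the pose-estimation stage.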


Source journal

IET Cybersystems and Robotics (Computer Science: Information Systems)

CiteScore: 3.70 · Self-citation rate: 0.00% · Articles published: 31 · Review time: 34 weeks