Visual Odometry in Dynamic Environments using Light Weight Semantic Segmentation

Richard Josiah C. Tan Ai, Dino Dominic F. Ligutan, Allysa Kate M. Brillantes, Jason L. Española, E. Dadios
{"title":"基于轻量级语义分割的动态环境视觉里程计量","authors":"Richard Josiah C. Tan Ai, Dino Dominic F. Ligutan, Allysa Kate M. Brillantes, Jason L. Española, E. Dadios","doi":"10.1109/HNICEM48295.2019.9073562","DOIUrl":null,"url":null,"abstract":"Visual odometry is the method in which a robot tracks its position and orientation using a sequence of images. Feature based visual odometry matches feature between frames and estimates the pose of the robot according to the matched features. These methods typically assume a static environment and relies on statistical methods such as RANSAC to remove outliers such as moving objects. But in highly dynamic environment where majority of the scene is composed of moving objects these methods fail. This paper proposes to use the feature based visual odometry part of ORB-SLAM2 RGB-D and improve it using DeepLabv3-MobileNetV2 semantic segmentation. The semantic segmentation algorithm is used to segment the image, then extracted feature points that are on pixels of dynamic objects (people) are not tracked. The method is tested on TUM-RGBD dataset. Evaluation shows that the proposed algorithm performs significantly better in dynamic scenes compared to the base algorithm, with reduction in Absolute Trajectory Error (ATE) greater than 92.90% compared to the base algorithm in fr3w_xyz, fr3w_rpy and fr3_half sequences. Additionally, when comparing the algorithm that used DeepLabv3-MobileNetV2 to the computationally intensive DeepLabv3-Xception65, the largest increase in ATE was 27%, while the computation time is 3 times faster.","PeriodicalId":6733,"journal":{"name":"2019 IEEE 11th International Conference on Humanoid, Nanotechnology, Information Technology, Communication and Control, Environment, and Management ( HNICEM )","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Visual Odometry in Dynamic Environments using Light Weight Semantic Segmentation\",\"authors\":\"Richard Josiah C. Tan Ai, Dino Dominic F. Ligutan, Allysa Kate M. Brillantes, Jason L. Española, E. Dadios\",\"doi\":\"10.1109/HNICEM48295.2019.9073562\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Visual odometry is the method in which a robot tracks its position and orientation using a sequence of images. Feature based visual odometry matches feature between frames and estimates the pose of the robot according to the matched features. These methods typically assume a static environment and relies on statistical methods such as RANSAC to remove outliers such as moving objects. But in highly dynamic environment where majority of the scene is composed of moving objects these methods fail. This paper proposes to use the feature based visual odometry part of ORB-SLAM2 RGB-D and improve it using DeepLabv3-MobileNetV2 semantic segmentation. The semantic segmentation algorithm is used to segment the image, then extracted feature points that are on pixels of dynamic objects (people) are not tracked. The method is tested on TUM-RGBD dataset. Evaluation shows that the proposed algorithm performs significantly better in dynamic scenes compared to the base algorithm, with reduction in Absolute Trajectory Error (ATE) greater than 92.90% compared to the base algorithm in fr3w_xyz, fr3w_rpy and fr3_half sequences. 
Additionally, when comparing the algorithm that used DeepLabv3-MobileNetV2 to the computationally intensive DeepLabv3-Xception65, the largest increase in ATE was 27%, while the computation time is 3 times faster.\",\"PeriodicalId\":6733,\"journal\":{\"name\":\"2019 IEEE 11th International Conference on Humanoid, Nanotechnology, Information Technology, Communication and Control, Environment, and Management ( HNICEM )\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-11-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2019 IEEE 11th International Conference on Humanoid, Nanotechnology, Information Technology, Communication and Control, Environment, and Management ( HNICEM )\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/HNICEM48295.2019.9073562\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 IEEE 11th International Conference on Humanoid, Nanotechnology, Information Technology, Communication and Control, Environment, and Management ( HNICEM )","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/HNICEM48295.2019.9073562","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

Visual odometry is the method by which a robot tracks its position and orientation using a sequence of images. Feature-based visual odometry matches features between frames and estimates the pose of the robot from the matched features. These methods typically assume a static environment and rely on statistical techniques such as RANSAC to remove outliers such as moving objects, but in highly dynamic environments, where the majority of the scene consists of moving objects, they fail. This paper proposes to take the feature-based visual odometry component of ORB-SLAM2 (RGB-D) and improve it using DeepLabv3-MobileNetV2 semantic segmentation. The semantic segmentation network segments each image, and extracted feature points that lie on pixels of dynamic objects (people) are excluded from tracking. The method is tested on the TUM RGB-D dataset. Evaluation shows that the proposed algorithm performs significantly better in dynamic scenes than the base algorithm, reducing Absolute Trajectory Error (ATE) by more than 92.90% on the fr3w_xyz, fr3w_rpy, and fr3_half sequences. Additionally, when comparing the algorithm using DeepLabv3-MobileNetV2 against the computationally intensive DeepLabv3-Xception65, the largest increase in ATE was 27%, while computation is three times faster.
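As a rough illustration of the masking step described in the abstract, the sketch below (not the authors' code; the function name, parameters, and the use of OpenCV's ORB detector are assumptions) detects ORB keypoints in a frame and discards those that fall on pixels labelled as "person" by a semantic segmentation network such as DeepLabv3-MobileNetV2. The segmentation mask is assumed to be precomputed and passed in as a boolean array; the surviving keypoints would then be handed to the tracking front end.

```python
# Minimal sketch, assuming a precomputed per-pixel "person" mask from the
# segmentation network. Only keypoints on static pixels are kept for tracking.
import cv2
import numpy as np

def filter_dynamic_keypoints(gray, person_mask, n_features=1000, dilate_px=5):
    """Detect ORB keypoints and drop those that fall on dynamic-object pixels."""
    # Slightly grow the mask so keypoints on object boundaries are also rejected.
    if dilate_px > 0:
        kernel = np.ones((dilate_px, dilate_px), np.uint8)
        person_mask = cv2.dilate(person_mask.astype(np.uint8), kernel) > 0

    orb = cv2.ORB_create(nfeatures=n_features)
    keypoints = orb.detect(gray, None)

    # Keep only keypoints whose pixel is not flagged as dynamic.
    h, w = person_mask.shape
    static_kps = []
    for kp in keypoints:
        x = min(int(round(kp.pt[0])), w - 1)
        y = min(int(round(kp.pt[1])), h - 1)
        if not person_mask[y, x]:
            static_kps.append(kp)

    static_kps, descriptors = orb.compute(gray, static_kps)
    return static_kps, descriptors

if __name__ == "__main__":
    # Hypothetical inputs: a grayscale frame and a person mask from the segmenter.
    frame = np.random.randint(0, 255, (480, 640), dtype=np.uint8)
    mask = np.zeros((480, 640), dtype=bool)
    mask[100:300, 200:400] = True   # pretend a person occupies this region
    kps, desc = filter_dynamic_keypoints(frame, mask)
    print(f"{len(kps)} static keypoints kept out of the detected set")
```

The small dilation of the mask is an illustrative choice: it rejects keypoints sitting on the border between a person and the background, where depth and appearance are least reliable.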