Comparative study on real-time pose estimation of vision-based unmanned underwater vehicles

Cobot · Pub Date: 2023-01-30 · DOI: 10.12688/cobot.17642.1
Ming Li, Ke Yang, J. Qin, J. Zhong, Zipeng Jiang, Qin Su
{"title":"基于视觉的无人潜航器实时姿态估计的比较研究","authors":"Ming Li, Ke Yang, J. Qin, J. Zhong, Zipeng Jiang, Qin Su","doi":"10.12688/cobot.17642.1","DOIUrl":null,"url":null,"abstract":"Background: Navigation and localization are key to the successful execution of autonomous unmanned underwater vehicles (UUVs) in marine environmental monitoring, underwater 3D mapping, and ocean resource surveys. The estimation of the position and the orientation of autonomous UUVs are a long-standing challenging and fundamental problem. As one of the underwater sensors, camera has always been the focus of attention due to its advantages of low cost and rich content information in visibility waters, especially in the fields of visual perception of the underwater environment, target recognition and tracking. At present, the visual real-time pose estimation technology that can be used for UUVs is mainly divided into geometry-based visual positioning algorithms and deep learning-based visual positioning algorithms. Methods: In order to compare the performance of different positioning algorithms and strategies, this paper uses C++ and python, takes the ORB-SLAM3 algorithm and DF-VO algorithm as representatives to conduct a comparative experiment and analysis. Results: The geometry-based algorithm ORB-SLAM3 is less affected by illumination, performs more stably in different underwater environments, and has a shorter calculation time, but its robustness is poor in complex environments. The visual positioning algorithm DF-VO based on deep learning takes longer time to compute, and the positioning accuracy is more easily affected by illumination, especially in dark conditions. However, its robustness is better in unstructured environments such as large-scale image rotation and dynamic object interference. Conclusions: In general, the deep learning-based algorithm is more robust, but multiple deep learning networks make it need more time to compute. The geometry-based method costs less time and is more accurate in low-light and turbid underwater conditions. However, in real underwater situations, these two methods can be connected as binocular vision or methods of multi-sensor combined pose estimation.","PeriodicalId":29807,"journal":{"name":"Cobot","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2023-01-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Comparative study on real-time pose estimation of vision-based unmanned underwater vehicles\",\"authors\":\"Ming Li, Ke Yang, J. Qin, J. Zhong, Zipeng Jiang, Qin Su\",\"doi\":\"10.12688/cobot.17642.1\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Background: Navigation and localization are key to the successful execution of autonomous unmanned underwater vehicles (UUVs) in marine environmental monitoring, underwater 3D mapping, and ocean resource surveys. The estimation of the position and the orientation of autonomous UUVs are a long-standing challenging and fundamental problem. As one of the underwater sensors, camera has always been the focus of attention due to its advantages of low cost and rich content information in visibility waters, especially in the fields of visual perception of the underwater environment, target recognition and tracking. At present, the visual real-time pose estimation technology that can be used for UUVs is mainly divided into geometry-based visual positioning algorithms and deep learning-based visual positioning algorithms. 
Methods: In order to compare the performance of different positioning algorithms and strategies, this paper uses C++ and python, takes the ORB-SLAM3 algorithm and DF-VO algorithm as representatives to conduct a comparative experiment and analysis. Results: The geometry-based algorithm ORB-SLAM3 is less affected by illumination, performs more stably in different underwater environments, and has a shorter calculation time, but its robustness is poor in complex environments. The visual positioning algorithm DF-VO based on deep learning takes longer time to compute, and the positioning accuracy is more easily affected by illumination, especially in dark conditions. However, its robustness is better in unstructured environments such as large-scale image rotation and dynamic object interference. Conclusions: In general, the deep learning-based algorithm is more robust, but multiple deep learning networks make it need more time to compute. The geometry-based method costs less time and is more accurate in low-light and turbid underwater conditions. However, in real underwater situations, these two methods can be connected as binocular vision or methods of multi-sensor combined pose estimation.\",\"PeriodicalId\":29807,\"journal\":{\"name\":\"Cobot\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-01-30\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Cobot\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.12688/cobot.17642.1\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Cobot","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.12688/cobot.17642.1","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

Background: Navigation and localization are key to the successful operation of autonomous unmanned underwater vehicles (UUVs) in marine environmental monitoring, underwater 3D mapping, and ocean resource surveys. Estimating the position and orientation of autonomous UUVs is a long-standing, challenging, and fundamental problem. Among underwater sensors, the camera has long been a focus of attention because of its low cost and the rich information it provides in waters with good visibility, especially for visual perception of the underwater environment and for target recognition and tracking. At present, real-time visual pose estimation techniques applicable to UUVs fall mainly into two categories: geometry-based visual positioning algorithms and deep learning-based visual positioning algorithms.

Methods: To compare the performance of different positioning algorithms and strategies, this paper uses C++ and Python to conduct comparative experiments and analysis, taking the ORB-SLAM3 algorithm and the DF-VO algorithm as representatives of the two categories.

Results: The geometry-based algorithm ORB-SLAM3 is less affected by illumination, performs more stably across different underwater environments, and requires less computation time, but its robustness is poor in complex environments. The deep learning-based visual positioning algorithm DF-VO takes longer to compute, and its positioning accuracy is more easily affected by illumination, especially in dark conditions; however, it is more robust in unstructured scenarios such as large image rotations and interference from dynamic objects.

Conclusions: In general, the deep learning-based algorithm is more robust, but its reliance on multiple deep networks makes it more computationally expensive. The geometry-based method costs less time and is more accurate in low-light and turbid underwater conditions. In practice, the two approaches can also be combined, for example in binocular vision setups or multi-sensor fused pose estimation.
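To make the comparison concrete, the sketch below shows one common way such experiments are scored: aligning each estimator's trajectory to a reference with a similarity transform (Umeyama alignment, which also absorbs the scale ambiguity of monocular methods such as DF-VO) and reporting the absolute trajectory error (ATE) RMSE. This is a minimal illustrative sketch, not the paper's evaluation code; the function names and the Nx3 position arrays are assumptions.

```python
# Illustrative sketch (not the paper's code): scoring two pose estimators
# against a reference trajectory using absolute trajectory error (ATE).
# `est` and `ref` are assumed to be Nx3 arrays of time-synchronised positions.
import numpy as np


def align_umeyama(est, ref):
    """Similarity transform (scale s, rotation R, translation t) minimising ||ref - (s*R@est + t)||."""
    mu_e, mu_r = est.mean(axis=0), ref.mean(axis=0)
    e, r = est - mu_e, ref - mu_r
    cov = r.T @ e / est.shape[0]                   # 3x3 cross-covariance of centred points
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:   # guard against a reflection solution
        S[2, 2] = -1.0
    R = U @ S @ Vt
    s = np.trace(np.diag(D) @ S) / e.var(axis=0).sum()   # scale absorbs monocular ambiguity
    t = mu_r - s * R @ mu_e
    return s, R, t


def ate_rmse(est, ref):
    """Root-mean-square ATE after similarity alignment."""
    s, R, t = align_umeyama(est, ref)
    aligned = (s * (R @ est.T)).T + t
    return float(np.sqrt(((aligned - ref) ** 2).sum(axis=1).mean()))


if __name__ == "__main__":
    # Hypothetical usage: positions exported from each system, synchronised to a reference.
    ref = np.cumsum(np.random.randn(200, 3) * 0.05, axis=0)       # stand-in reference path
    est_orbslam3 = ref + np.random.randn(200, 3) * 0.01           # stand-in ORB-SLAM3 output
    est_dfvo = 0.9 * ref + np.random.randn(200, 3) * 0.02         # stand-in DF-VO output (off scale)
    print("ORB-SLAM3 ATE RMSE:", ate_rmse(est_orbslam3, ref))
    print("DF-VO     ATE RMSE:", ate_rmse(est_dfvo, ref))
```

Under such a metric, a lower ATE RMSE for ORB-SLAM3 on low-light or turbid sequences and a lower value for DF-VO on sequences with large rotations or dynamic objects would reflect the qualitative findings summarised above.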
Source journal: Cobot (collaborative robots)
About the journal: Cobot is a rapid multidisciplinary open access publishing platform for research focused on the interdisciplinary field of collaborative robots. The aim of Cobot is to enhance knowledge and share the results of the latest innovative technologies for the technicians, researchers and experts engaged in collaborative robot research. The platform welcomes submissions in all areas of scientific and technical research related to collaborative robots, and all articles benefit from open peer review.
The scope of Cobot includes, but is not limited to:
● Intelligent robots
● Artificial intelligence
● Human-machine collaboration and integration
● Machine vision
● Intelligent sensing
● Smart materials
● Design, development and testing of collaborative robots
● Software for cobots
● Industrial applications of cobots
● Service applications of cobots
● Medical and health applications of cobots
● Educational applications of cobots
As well as research articles and case studies, Cobot accepts a variety of article types including method articles, study protocols, software tools, systematic reviews, data notes, brief reports, and opinion articles.
Latest articles from this journal:
Load torque observation and compensation for permanent magnet synchronous motor based on sliding mode observer
Design and optimization of soft colonoscopy robot with variable cross section
Robot-assisted homecare for older adults: A user study on needs and challenges
Machine vision-based automatic focusing method for robot laser welding system
A dynamic obstacle avoidance method for collaborative robots based on trajectory optimization