An Efficient and Robust Framework for Collaborative Monocular Visual SLAM

Dipanjan Das, Soumyadip Maity, B. Dhara
DOI: 10.1109/AVSS52988.2021.9663736
Published 2021-11-16 in: 2021 17th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS)
Citations: 0

Abstract

Visual SLAM (VSLAM) has shown remarkable performance in robot navigation, and its practical applicability can be enriched by building a multi-robot collaboration framework called visual collaborative SLAM (CoSLAM). CoSLAM extends SLAM to navigation over larger areas for applications such as inspection, using multiple vehicles, which saves both time and power. A visual CoSLAM framework must address several challenges: i) robots should be able to start from anywhere in the scene using their own VSLAM, which saves both time and power; ii) the framework should be independent of the choice of SLAM, for greater applicability across different SLAMs; and iii) robots must avoid colliding with one another, which requires a robust merging of two noisy maps once visual overlap is detected. Very few works in the literature address all of these problems in a single, practical framework. In this paper, we present a CoSLAM framework using monocular cameras that addresses all of the above problems. Unlike existing systems that work only with ORB-SLAM, our framework is truly SLAM-independent. We propose a deep-learning-based algorithm to find the visually overlapping scenes required for merging two or more 3D maps. Our map merging is robust in the presence of outliers because we compute similarity transforms using both structural information and camera-camera relationships, and choose between them based on statistical inference. Experimental results show that our framework is robust and works well with any individual SLAM; we demonstrate results on ORB-SLAM and EdgeSLAM, which are prototypical extremes for map merging in a CoSLAM framework.
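The abstract says similarity transforms are computed from structural information as well as camera-camera relationships, but gives no details. As a hedged sketch of the structural route only, assuming point correspondences between the two maps are already known, the closed-form Umeyama method recovers the Sim(3) alignment (scale, rotation, translation) between two 3D maps; the function and variable names below are ours, and the paper's actual estimator and its statistical selection step may differ.

```python
import numpy as np

def umeyama_sim3(src, dst):
    """Closed-form similarity transform (Umeyama, 1991).

    Given corresponding 3D points src, dst (N x 3, rows are points),
    returns (s, R, t) such that dst ≈ s * R @ src_i + t for each point.
    """
    n = len(src)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    xs, xd = src - mu_s, dst - mu_d            # centered point sets
    cov = xd.T @ xs / n                        # cross-covariance (3 x 3)
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0                         # guard against reflections
    R = U @ S @ Vt                             # optimal rotation
    var_s = (xs ** 2).sum() / n                # variance of source points
    s = (D * np.diag(S)).sum() / var_s         # optimal scale
    t = mu_d - s * R @ mu_s                    # optimal translation
    return s, R, t
```

In practice one would wrap such an estimator in a RANSAC loop over putative correspondences to obtain the outlier robustness the paper claims for its map merging.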
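The overlap detector in the paper is deep-learning based, but the abstract does not specify it. As a hedged illustration of the retrieval step such a detector typically performs, the sketch below matches keyframes across two maps by cosine similarity of global image descriptors (e.g., embeddings from a place-recognition network such as NetVLAD); the function name, threshold, and greedy best-match strategy are our assumptions, not the paper's method.

```python
import numpy as np

def find_overlaps(desc_a, desc_b, thresh=0.8):
    """Match keyframes of map A to map B by descriptor similarity.

    desc_a, desc_b: (Na x d), (Nb x d) global image descriptors, one
    row per keyframe. Returns (i, j, score) triples where keyframe i of
    map A and keyframe j of map B look at the same scene (cosine
    similarity above `thresh`), i.e., candidate regions for map merging.
    """
    a = desc_a / np.linalg.norm(desc_a, axis=1, keepdims=True)
    b = desc_b / np.linalg.norm(desc_b, axis=1, keepdims=True)
    sim = a @ b.T                       # pairwise cosine similarities
    pairs = []
    for i in range(sim.shape[0]):
        j = int(np.argmax(sim[i]))      # best match in the other map
        if sim[i, j] >= thresh:
            pairs.append((i, j, float(sim[i, j])))
    return pairs
```

Accepted pairs would then seed correspondence search and the similarity-transform estimation that merges the two maps.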