Multi-Cam ARM-SLAM: Robust Multi-Modal State Estimation Using Truncated Signed Distance Functions for Mobile Rescue Robots

Jasper Süß, Marius Schnaubelt, O. Stryk
DOI: 10.1109/SSRR56537.2022.10018752
Published in: 2022 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR)
Publication date: 2022-11-08
Citations: 0

Abstract

To perform manipulation tasks in an unknown environment, rescue robots require a detailed model of their surroundings, which is often generated from registered depth images. However, erroneous camera registrations due to noisy motor encoder readings, a faulty kinematic model, or other error sources can drastically reduce the model quality. Most existing approaches register the pose of a free-floating single camera without considering the constraints imposed by the robot's kinematic configuration. In contrast, ARM-SLAM [1] performs dense localization and mapping in the configuration space of the robot arm, implicitly tracking the pose of a single camera and creating a volumetric model. However, a single camera covers only a small field of view and can constrain at most six degrees of freedom. We therefore propose the Multi-Cam ARM-SLAM (MC-ARM-SLAM) framework, which fuses the information of multiple depth cameras mounted on the robot into a joint model. Using multiple cameras also makes it possible to estimate the motion of the robot base, which is modeled as a virtual kinematic chain in addition to the arm. Furthermore, we use a robust bivariate error formulation, which boosts the accuracy of the method and mitigates the influence of outliers. The proposed method is extensively evaluated in simulation and on a real rescue robot. We show that it corrects errors in the motor encoders and the kinematic model and outperforms the base version of ARM-SLAM.
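The volumetric model named in the title is a truncated signed distance function (TSDF): each voxel stores a clipped, signed distance to the nearest observed surface, fused over many depth images by a running weighted average. The sketch below illustrates this standard TSDF integration step in NumPy; all function and parameter names are our own illustration, not the authors' implementation.

```python
import numpy as np

def integrate_depth(tsdf, weights, depth, K, T_world_cam, origin, voxel_size, trunc=0.2):
    """Fuse one registered depth image into a TSDF voxel grid (illustrative sketch).

    tsdf, weights: (X, Y, Z) arrays, updated in place and returned.
    depth: (H, W) metric depth image; K: 3x3 intrinsics;
    T_world_cam: 4x4 camera-to-world pose, e.g. from the kinematic chain.
    """
    X, Y, Z = tsdf.shape
    idx = np.stack(np.meshgrid(np.arange(X), np.arange(Y), np.arange(Z),
                               indexing="ij"), axis=-1).reshape(-1, 3)
    pts_world = origin + (idx + 0.5) * voxel_size           # voxel centres
    T_cam_world = np.linalg.inv(T_world_cam)
    pts_cam = pts_world @ T_cam_world[:3, :3].T + T_cam_world[:3, 3]
    z = pts_cam[:, 2]
    z_safe = np.where(z > 1e-9, z, 1.0)                     # avoid divide-by-zero
    uv = pts_cam @ K.T                                      # pinhole projection
    u = np.round(uv[:, 0] / z_safe).astype(int)
    v = np.round(uv[:, 1] / z_safe).astype(int)
    H, W = depth.shape
    ok = (z > 1e-9) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    d = np.zeros_like(z)
    d[ok] = depth[v[ok], u[ok]]
    ok &= d > 0                                             # valid measurements only
    sdf = d - z                     # positive in front of the observed surface
    ok &= sdf > -trunc              # skip voxels far behind the surface
    tsdf_new = np.clip(sdf / trunc, -1.0, 1.0)
    flat = np.ravel_multi_index((idx[ok, 0], idx[ok, 1], idx[ok, 2]), tsdf.shape)
    t, w = tsdf.ravel(), weights.ravel()
    t[flat] = (t[flat] * w[flat] + tsdf_new[ok]) / (w[flat] + 1.0)  # running average
    w[flat] += 1.0
    return tsdf, weights
```

With multiple cameras, the same grid is simply updated once per camera using each camera's forward-kinematic pose, which is what couples all views into one joint model.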
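The "virtual kinematic chain" for the base can be pictured as extra, unactuated joints prepended to the arm, so that base motion and joint motion live in one configuration space. A planar toy sketch of this idea (the 2-D simplification and all names are ours, not the paper's model):

```python
import numpy as np

def hom(theta, tx, ty):
    """2-D homogeneous transform: rotate by theta, then translate by (tx, ty)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, tx],
                     [s,  c, ty],
                     [0.0, 0.0, 1.0]])

def camera_pose(q, link_lengths):
    """Pose of a camera at the end of a planar chain.

    The first three entries of q are the virtual base DOFs (x, y, yaw)
    prepended to the chain; the remaining entries are revolute arm joints.
    """
    x, y, yaw, *arm = q
    T = hom(yaw, x, y)                                        # virtual base joints
    for theta, length in zip(arm, link_lengths):
        T = T @ hom(theta, 0.0, 0.0) @ hom(0.0, length, 0.0)  # joint, then link
    return T
```

Because every camera pose is a function of the same configuration vector q, residuals from all cameras jointly constrain both the arm joints and the virtual base joints, which is why multiple cameras can observe base motion that a single camera cannot.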
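The paper's robust bivariate error formulation is not reproduced here. As a generic stand-in, the widely used Huber kernel inside an iteratively reweighted least-squares (IRLS) update shows the mechanism by which outlier measurements are down-weighted during estimation:

```python
import numpy as np

def huber_weight(r, delta=0.05):
    """IRLS weight of the Huber kernel: 1 for small residuals,
    delta/|r| beyond delta, so gross outliers contribute less."""
    a = np.abs(r)
    return np.where(a <= delta, 1.0, delta / np.maximum(a, 1e-12))

def irls_step(J, r, delta=0.05):
    """One iteratively reweighted least-squares step:
    solve (J^T W J) dq = -J^T W r with Huber weights on the diagonal of W."""
    w = huber_weight(r, delta)
    JW = J * w[:, None]
    return np.linalg.solve(JW.T @ J, -JW.T @ r)
```

A plain least-squares fit is dragged toward a single gross outlier, while the reweighted update converges near the inlier consensus; the paper's bivariate formulation serves the same purpose for its TSDF residuals.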