{"title":"多凸轮臂slam:基于截断签名距离函数的移动救援机器人鲁棒多模态估计","authors":"Jasper Süß, Marius Schnaubelt, O. Stryk","doi":"10.1109/SSRR56537.2022.10018752","DOIUrl":null,"url":null,"abstract":"To be able to perform manipulation tasks within an unknown environment, rescue robots require a detailed model of their surroundings, which is often generated using registered depth images as an input. However, erroneous camera registrations due to noisy motor encoder readings, a faulty kinematic model or other error sources can drastically reduce the model quality. Most existing approaches register the pose of a free-floating single camera without considering constraints by the kinematic robot configuration. In contrast, ARM-SLAM [1] performs dense localization and mapping in the configuration space of the robot arm, implicitly tracking the pose of a single camera and creating a volumetric model. However, using a single camera only allows to cover a small field of view and can only constrain up to six degrees of freedom. Therefore, we propose the Multi-Cam ARM-SLAM (MC-ARM-SLAM) framework, which fuses information of multiple depth cameras mounted on the robot into a joint model. The use of multiple cameras allows to also estimate the motion of the robot base that is modeled as a virtual kinematic chain additionally to the motion of the arm. Furthermore, we use a robust bivariate error formulation, which helps to boost the accuracy of the method and mitigates the influence of outliers. The proposed method is extensively evaluated in simulation and on a real rescue robot. It is shown that the method is able to correct errors in the motor encoders and the kinematic model and outperforms the base version of ARM-SLAM.","PeriodicalId":272862,"journal":{"name":"2022 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR)","volume":"108 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Multi-Cam ARM-SLAM: Robust Multi-Modal State Estimation Using Truncated Signed Distance Functions for Mobile Rescue Robots\",\"authors\":\"Jasper Süß, Marius Schnaubelt, O. Stryk\",\"doi\":\"10.1109/SSRR56537.2022.10018752\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"To be able to perform manipulation tasks within an unknown environment, rescue robots require a detailed model of their surroundings, which is often generated using registered depth images as an input. However, erroneous camera registrations due to noisy motor encoder readings, a faulty kinematic model or other error sources can drastically reduce the model quality. Most existing approaches register the pose of a free-floating single camera without considering constraints by the kinematic robot configuration. In contrast, ARM-SLAM [1] performs dense localization and mapping in the configuration space of the robot arm, implicitly tracking the pose of a single camera and creating a volumetric model. However, using a single camera only allows to cover a small field of view and can only constrain up to six degrees of freedom. Therefore, we propose the Multi-Cam ARM-SLAM (MC-ARM-SLAM) framework, which fuses information of multiple depth cameras mounted on the robot into a joint model. The use of multiple cameras allows to also estimate the motion of the robot base that is modeled as a virtual kinematic chain additionally to the motion of the arm. 
Furthermore, we use a robust bivariate error formulation, which helps to boost the accuracy of the method and mitigates the influence of outliers. The proposed method is extensively evaluated in simulation and on a real rescue robot. It is shown that the method is able to correct errors in the motor encoders and the kinematic model and outperforms the base version of ARM-SLAM.\",\"PeriodicalId\":272862,\"journal\":{\"name\":\"2022 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR)\",\"volume\":\"108 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-11-08\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/SSRR56537.2022.10018752\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SSRR56537.2022.10018752","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Multi-Cam ARM-SLAM: Robust Multi-Modal State Estimation Using Truncated Signed Distance Functions for Mobile Rescue Robots
To perform manipulation tasks in an unknown environment, rescue robots require a detailed model of their surroundings, which is often generated from registered depth images. However, erroneous camera registrations caused by noisy motor encoder readings, a faulty kinematic model, or other error sources can drastically reduce the quality of this model. Most existing approaches register the pose of a single free-floating camera without considering the constraints imposed by the robot's kinematic configuration. In contrast, ARM-SLAM [1] performs dense localization and mapping in the configuration space of the robot arm, implicitly tracking the pose of a single camera while building a volumetric model. However, a single camera covers only a small field of view and can constrain at most six degrees of freedom. We therefore propose the Multi-Cam ARM-SLAM (MC-ARM-SLAM) framework, which fuses the information of multiple depth cameras mounted on the robot into a joint model. Using multiple cameras also makes it possible to estimate the motion of the robot base, which is modeled as a virtual kinematic chain, in addition to the motion of the arm. Furthermore, we use a robust bivariate error formulation, which increases the accuracy of the method and mitigates the influence of outliers. The proposed method is evaluated extensively in simulation and on a real rescue robot. We show that it corrects errors in the motor encoders and the kinematic model and outperforms the base version of ARM-SLAM.
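To make the registration principle concrete, below is a minimal sketch of dense localization in configuration space against a TSDF, in the spirit of ARM-SLAM. Everything in it is an illustrative assumption rather than the paper's implementation: a planar 2-DoF arm with a wrist-mounted depth camera, an analytic two-wall TSDF in place of a fused volumetric map, a plain Huber reweighting standing in for the paper's bivariate error formulation, and a damped Gauss-Newton update. What it demonstrates is the core idea the abstract describes: the optimization variables are joint angles rather than a free-floating camera pose, so the kinematic chain constrains the estimate and encoder errors are corrected directly.

```python
# Sketch only (assumed toy setup, not the authors' code): optimize joint
# angles so that depth points, pushed through forward kinematics, land on
# the zero level set of a truncated signed distance function (TSDF).
import numpy as np

L1, L2 = 1.0, 1.0   # link lengths of a toy planar 2-DoF arm (assumption)
TRUNC = 0.15        # TSDF truncation distance [m] (assumption)

def tsdf(p):
    """Truncated signed distance to two walls (x = 2 and y = 2), a stand-in
    for a fused volumetric model."""
    return np.clip(np.minimum(2.0 - p[..., 0], 2.0 - p[..., 1]), -TRUNC, TRUNC)

def rot(th):
    c, s = np.cos(th), np.sin(th)
    return np.array([[c, -s], [s, c]])

def cam_pose(q):
    """Forward kinematics: position and heading of the wrist-mounted camera."""
    a1, a12 = q[0], q[0] + q[1]
    pos = np.array([L1 * np.cos(a1) + L2 * np.cos(a12),
                    L1 * np.sin(a1) + L2 * np.sin(a12)])
    return pos, a12

def to_world(q, pts_cam):
    pos, th = cam_pose(q)
    return pts_cam @ rot(th).T + pos

def residuals(q, pts_cam):
    # With a perfect registration every depth point lies on the surface,
    # i.e. on the TSDF zero level set.
    return tsdf(to_world(q, pts_cam))

def register(q0, pts_cam, iters=10, eps=1e-6, k=0.05):
    """Robustly reweighted Gauss-Newton over joint angles (IRLS). The Huber
    weight here replaces the paper's bivariate error formulation."""
    q = q0.copy()
    for _ in range(iters):
        r = residuals(q, pts_cam)
        w = np.where(np.abs(r) <= k, 1.0, k / np.maximum(np.abs(r), 1e-12))
        # Numeric Jacobian of the residuals w.r.t. the two joint angles.
        J = np.stack([(residuals(q + eps * np.eye(2)[j], pts_cam) - r) / eps
                      for j in range(2)], axis=1)
        H = J.T @ (w[:, None] * J) + 1e-9 * np.eye(2)   # damped normal equations
        q = q - np.linalg.solve(H, J.T @ (w * r))
    return q

# Ground-truth configuration vs. a biased encoder readout.
q_true = np.array([0.3, 0.4])
q_enc = q_true + np.array([0.05, -0.07])

# Synthetic depth points on both walls, expressed in the TRUE camera frame.
wall = np.concatenate([
    np.stack([np.full(15, 2.0), np.linspace(0.2, 1.0, 15)], axis=1),
    np.stack([np.linspace(0.2, 1.0, 15), np.full(15, 2.0)], axis=1)])
pos, th = cam_pose(q_true)
pts_cam = (wall - pos) @ rot(th)   # world -> camera (inverse rotation)

q_est = register(q_enc, pts_cam)
print("encoder joint error :", np.round(np.abs(q_enc - q_true), 4))
print("after registration  :", np.round(np.abs(q_est - q_true), 4))
```

Per the abstract, MC-ARM-SLAM extends this scheme by stacking residuals from several cameras mounted along the robot and by appending the base motion as a virtual kinematic chain to the optimized configuration; in the sketch above that would amount to growing `q` with the virtual base coordinates and adding one `residuals` term per camera, each with its own forward-kinematics chain.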