Collaborative Human-Robot Exploration via Implicit Coordination
Pub Date: 2022-09-19 | DOI: 10.1109/SSRR56537.2022.10018729
Yves Georgy Daoud, K. Goel, Nathan Michael, Wennie Tabib
This paper develops a methodology for collaborative human-robot exploration that leverages implicit coordination. Most autonomous single- and multi-robot exploration systems require a remote operator to provide explicit guidance to the robotic team, and few works consider how to embed the human partner alongside robots to provide guidance in the field. A remaining challenge for collaborative human-robot exploration is the efficient communication of goals from the human to the robot. In this paper, we develop a methodology that implicitly communicates a region of interest to the robot from a helmet-mounted depth camera on the human's head, together with an information gain-based exploration objective that biases motion planning within the viewpoint provided by the human. The result is an aerial system that safely accesses regions of interest that may not be immediately viewable or reachable by the human. The approach is evaluated in simulation and with hardware experiments in a motion capture arena. Videos of the simulation and hardware experiments are available at: https://youtu.be/7jgkBpVFIoE.
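To make the biased objective concrete, here is a minimal sketch, assuming a cone-shaped region of interest derived from the helmet camera's viewing direction and a simple entropy-sum information gain; all names and parameters are illustrative assumptions, not the authors' implementation.

```python
# Sketch of a viewpoint-biased information-gain objective. The cone test,
# bias weight, and entropy-sum gain are assumptions for illustration only.
import numpy as np

def information_gain(voxel_entropies: np.ndarray) -> float:
    """Expected information gain of a candidate view: here, simply the
    summed entropy of the map voxels the view would observe."""
    return float(np.sum(voxel_entropies))

def biased_objective(candidate_dir: np.ndarray,
                     roi_dir: np.ndarray,
                     voxel_entropies: np.ndarray,
                     bias_weight: float = 2.0,
                     fov_cos: float = np.cos(np.radians(30))) -> float:
    """Scale the gain of candidate views that fall inside the cone around
    the direction implied by the human's helmet-mounted depth camera."""
    gain = information_gain(voxel_entropies)
    alignment = np.dot(candidate_dir, roi_dir) / (
        np.linalg.norm(candidate_dir) * np.linalg.norm(roi_dir))
    return gain * bias_weight if alignment >= fov_cos else gain

roi = np.array([1.0, 0.0, 0.0])            # direction the human is looking
cand = np.array([0.9, 0.1, 0.0])           # candidate view direction
print(biased_objective(cand, roi, np.array([0.3, 0.7, 0.5])))  # biased: 3.0
```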
{"title":"Collaborative Human-Robot Exploration via Implicit Coordination","authors":"Yves Georgy Daoud, K. Goel, Nathan Michael, Wennie Tabib","doi":"10.1109/SSRR56537.2022.10018729","DOIUrl":"https://doi.org/10.1109/SSRR56537.2022.10018729","url":null,"abstract":"This paper develops a methodology for collaborative human-robot exploration that leverages implicit coordination. Most autonomous single- and multi-robot exploration systems require a remote operator to provide explicit guidance to the robotic team. Few works consider how to embed the human partner alongside robots to provide guidance in the field. A remaining challenge for collaborative human-robot exploration is efficient communication of goals from the human to the robot. In this paper we develop a methodology that implicitly communicates a region of interest from a helmet-mounted depth camera on the human's head to the robot and an information gain-based exploration objective that biases motion planning within the viewpoint provided by the human. The result is an aerial system that safely accesses regions of interest that may not be immediately viewable or reachable by the human. The approach is evaluated in simulation and with hardware experiments in a motion capture arena. Videos of the simulation and hardware experiments are available at: https://youtu.be/7jgkBpVFIoE.","PeriodicalId":272862,"journal":{"name":"2022 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR)","volume":"185 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114372812","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hierarchical Collision Avoidance for Adaptive-Speed Multirotor Teleoperation
Pub Date: 2022-09-17 | DOI: 10.1109/SSRR56537.2022.10018782
K. Goel, Yves Georgy Daoud, Nathan Michael, Wennie Tabib
This paper improves safe motion primitives-based teleoperation of a multirotor by developing a hierarchical collision avoidance method that modulates maximum speed based on environment complexity and perceptual constraints. Safe speed modulation is challenging in environments that exhibit varying clutter. Existing methods fix maximum speed and map resolution, which prevents vehicles from accessing tight spaces and places the cognitive load for changing speed on the operator. We address these gaps by proposing a high-rate (10 Hz) teleoperation approach that modulates the maximum vehicle speed through hierarchical collision checking. The hierarchical collision checker simultaneously adapts the local map's voxel size and maximum vehicle speed to ensure motion planning safety. The proposed methodology is evaluated in simulation and real-world experiments and compared to a non-adaptive motion primitives-based teleoperation approach. The results demonstrate the advantages of the proposed teleoperation approach both in the time taken and in the ability to complete the task without requiring the user to specify a maximum vehicle speed.
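One way to picture the coarse-to-fine checking is the sketch below: each resolution level pairs a voxel size with a speed cap, and the vehicle keeps the fastest level whose safety check passes. The level table and is_primitive_safe() are assumptions for illustration, not the paper's interface.

```python
# Sketch of hierarchical collision checking with speed modulation: try
# progressively finer map resolutions, each paired with a lower speed cap.
from typing import Callable, Optional, Tuple

# (voxel_size [m], max_speed [m/s]) ordered from coarse/fast to fine/slow.
LEVELS: Tuple[Tuple[float, float], ...] = ((0.4, 4.0), (0.2, 2.0), (0.1, 1.0))

def select_speed(primitive,
                 is_primitive_safe: Callable[[object, float], bool]
                 ) -> Optional[float]:
    """Return the speed cap of the coarsest level at which `primitive`
    clears the obstacle map, or None if no level is collision-free."""
    for voxel_size, max_speed in LEVELS:
        if is_primitive_safe(primitive, voxel_size):
            return max_speed          # open space passes early -> fly fast
    return None                       # no safe level: reject the primitive
```

In open space the coarse check passes and the vehicle flies fast; in clutter only a finer voxel size clears, which automatically lowers the speed cap without operator input.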
{"title":"Hierarchical Collision Avoidance for Adaptive-Speed Multirotor Teleoperation","authors":"K. Goel, Yves Georgy Daoud, Nathan Michael, Wennie Tabib","doi":"10.1109/SSRR56537.2022.10018782","DOIUrl":"https://doi.org/10.1109/SSRR56537.2022.10018782","url":null,"abstract":"This paper improves safe motion primitives-based teleoperation of a multirotor by developing a hierarchical collision avoidance method that modulates maximum speed based on environment complexity and perceptual constraints. Safe speed modulation is challenging in environments that exhibit varying clutter. Existing methods fix maximum speed and map resolution, which prevents vehicles from accessing tight spaces and places the cognitive load for changing speed on the operator. We address these gaps by proposing a high-rate (10 Hz) teleoperation approach that modulates the maximum vehicle speed through hierarchical collision checking. The hierarchical collision checker simultaneously adapts the local map's voxel size and maximum vehicle speed to ensure motion planning safety. The proposed methodology is evaluated in simulation and real-world experiments and compared to a non-adaptive motion primitives-based teleoperation approach. The results demonstrate the advantages of the proposed teleoperation approach both in time taken and the ability to complete the task without requiring the user to specify a maximum vehicle speed.","PeriodicalId":272862,"journal":{"name":"2022 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR)","volume":"168 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129524982","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Leveraging Heterogeneous Capabilities in Multi-Agent Systems for Environmental Conflict Resolution
Pub Date: 2022-06-03 | DOI: 10.1109/SSRR56537.2022.10018728
M. Cao, J. Warnke, Yunhai Han, Xinpei Ni, Ye Zhao, S. Coogan
In this paper, we introduce a high-level controller synthesis framework that enables teams of heterogeneous agents to assist each other in resolving environmental conflicts that appear at runtime. This conflict resolution method is built upon temporal-logic-based reactive synthesis to guarantee safety and task completion under specific environment assumptions. In heterogeneous multi-agent systems, every agent is expected to complete its own tasks in service of a global team objective. However, at runtime, an agent may encounter un-modeled obstacles (e.g., doors or walls) that prevent it from achieving its own task. To address this problem, we employ the capabilities of other heterogeneous agents to resolve the obstacle. A controller framework is proposed that, when such a situation is detected, redirects agents capable of resolving the obstacle to the required target. Three case studies involving the bipedal robot Digit and a quadcopter are used to evaluate the controller's performance. Additionally, we implement the proposed framework on a physical multi-agent robotic system to demonstrate its viability for real-world applications.
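Setting aside the temporal-logic synthesis machinery, the core redirection step can be pictured as a capability lookup over the team. The sketch below is a deliberately simplified stand-in with invented obstacle classes and capability names, not the paper's reactive controller.

```python
# Toy capability-based dispatch: when an agent reports an un-modeled
# obstacle, redirect a teammate whose capabilities can resolve it.
# Obstacle classes and capability names are invented for illustration.
from dataclasses import dataclass, field
from typing import List, Optional, Set

@dataclass
class Agent:
    name: str
    capabilities: Set[str] = field(default_factory=set)
    busy: bool = False

RESOLVERS = {"door": "open_door", "debris": "push_object"}  # obstacle -> skill

def assign_resolver(obstacle: str, team: List[Agent]) -> Optional[Agent]:
    """Redirect the first idle agent able to resolve `obstacle`."""
    needed = RESOLVERS.get(obstacle)
    if needed is None:
        return None                      # no known way to resolve it
    for agent in team:
        if needed in agent.capabilities and not agent.busy:
            agent.busy = True
            return agent
    return None

team = [Agent("digit", {"open_door", "push_object"}), Agent("quad", {"inspect"})]
print(assign_resolver("door", team).name)  # -> digit
```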
{"title":"Leveraging Heterogeneous Capabilities in Multi-Agent Systems for Environmental Conflict Resolution","authors":"M. Cao, J. Warnke, Yunhai Han, Xinpei Ni, Ye Zhao, S. Coogan","doi":"10.1109/SSRR56537.2022.10018728","DOIUrl":"https://doi.org/10.1109/SSRR56537.2022.10018728","url":null,"abstract":"In this paper, we introduce a high-level controller synthesis framework that enables teams of heterogeneous agents to assist each other in resolving environmental conflicts that appear at runtime. This conflict resolution method is built upon temporal-logic-based reactive synthesis to guarantee safety and task completion under specific environment assumptions. In heterogeneous multi-agent systems, every agent is expected to complete its own tasks in service of a global team objective. However, at runtime, an agent may encounter un-modeled obstacles (e.g., doors or walls) that prevent it from achieving its own task. To address this problem, we employ the capabilities of other heterogeneous agents to resolve the obstacle. A controller framework is proposed to redirect agents with the capability of resolving the appropriate obstacles to the required target when such a situation is detected. Three case studies involving a bipedal robot Digit and a quadcopter are used to evaluate the controller performance in action. Additionally, we implement the proposed framework on a physical multi-agent robotic system to demonstrate its viability for real world applications.","PeriodicalId":272862,"journal":{"name":"2022 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115595540","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Marsupial Walking-and-Flying Robotic Deployment for Collaborative Exploration of Unknown Environments
Pub Date: 2022-05-11 | DOI: 10.1109/SSRR56537.2022.10018768
Paolo De Petris, Shehryar Khattak, M. Dharmadhikari, Gabriel Waibel, Huan Nguyen, Markus Montenegro, Nikhil Khedekar, K. Alexis, M. Hutter
This work contributes a marsupial robotic system-of-systems involving a legged and an aerial robot capable of collaborative mapping and exploration path planning, exploiting the heterogeneous properties of the two systems and the ability to selectively deploy the aerial system from the ground robot. Exploiting the dexterous locomotion capabilities and long endurance of quadruped robots, the marsupial combination can explore large-scale and confined environments involving rough terrain. However, as certain types of terrain or vertical geometries can render any ground system unable to continue its exploration, the marsupial system can, when needed, deploy the flying robot, which, by exploiting its 3D navigation capabilities, can undertake a focused exploration task within its endurance limitations. Focusing on autonomy, the two systems can colocalize and map together by sharing LiDAR-based maps and plan exploration paths individually, while a tailored graph search onboard the legged robot allows it to identify where and when the ferried aerial platform should be deployed. The system is verified in multiple experimental studies that demonstrate the expanded exploration capabilities of the marsupial system-of-systems and its ability to explore areas that neither robot could reach individually.
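The where-and-when deployment decision can be sketched as a search over a traversability graph: deploy at the last ground-reachable node on the path to a frontier that only flight can reach. The graph, edge labels, and function below are illustrative assumptions, not the paper's tailored graph search.

```python
# Toy deployment-point selection over a traversability graph.
# Edge attribute "mode" marks whether the legged robot can traverse it.
import networkx as nx

def deployment_node(g: nx.Graph, start, frontier):
    """Return the node where the legged robot should release the aerial
    robot: the last node reachable over ground-traversable edges on the
    shortest path to `frontier`, or None if the whole path is walkable."""
    path = nx.shortest_path(g, start, frontier)
    deploy = start
    for u, v in zip(path, path[1:]):
        if g.edges[u, v].get("mode") != "ground":
            return deploy              # next edge needs flight: deploy here
        deploy = v
    return None                        # fully walkable: no deployment needed

g = nx.Graph()
g.add_edge("a", "b", mode="ground")
g.add_edge("b", "c", mode="air")       # e.g., a vertical shaft
print(deployment_node(g, "a", "c"))    # -> b
```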
{"title":"Marsupial Walking-and-Flying Robotic Deployment for Collaborative Exploration of Unknown Environments","authors":"Paolo De Petris, Shehryar Khattak, M. Dharmadhikari, Gabriel Waibel, Huan Nguyen, Markus Montenegro, Nikhil Khedekar, K. Alexis, M. Hutter","doi":"10.1109/SSRR56537.2022.10018768","DOIUrl":"https://doi.org/10.1109/SSRR56537.2022.10018768","url":null,"abstract":"This work contributes a marsupial robotic system-of-systems involving a legged and an aerial robot capable of collaborative mapping and exploration path planning that exploits the heterogeneous properties of the two systems and the ability to selectively deploy the aerial system from the ground robot. Exploiting the dexterous locomotion capabilities and long endurance of quadruped robots, the marsupial combination can explore within large-scale and confined environments involving rough terrain. However, as certain types of terrain or vertical geometries can render any ground system unable to continue its exploration, the marsupial system can –when needed– deploy the flying robot which, by exploiting its 3D navigation capabilities, can undertake a focused exploration task within its endurance limitations. Focusing on autonomy, the two systems can colocalize and map together by sharing LiDAR-based maps and plan exploration paths individually, while a tailored graph search onboard the legged robot allows it to identify where and when the ferried aerial platform should be deployed. The system is verified within multiple experimental studies demonstrating the expanded exploration capabilities of the marsupial system-of-systems and facilitating the exploration of otherwise individually unreachable areas.","PeriodicalId":272862,"journal":{"name":"2022 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-05-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122427548","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Enhancing Navigational Safety in Crowded Environments using Semantic-Deep-Reinforcement-Learning-based Navigation
Pub Date: 2021-09-23 | DOI: 10.1109/SSRR56537.2022.10018699
Linh Kästner, Junhui Li, Zhengcheng Shen, Jens Lambrecht
Intelligent navigation among social crowds is an essential aspect of mobile robotics for applications such as delivery, health care, or assistance. Deep Reinforcement Learning has emerged as an alternative planning method to conservative approaches and promises more efficient and flexible navigation. However, in highly dynamic environments containing different obstacle classes, safe navigation still presents a grand challenge. In this paper, we propose a semantic Deep-Reinforcement-Learning-based navigation approach that teaches object-specific safety rules by considering high-level obstacle information. In particular, the agent learns object-specific behavior by considering the specific danger zones, enhancing safety around vulnerable object classes. We tested the approach against a benchmark obstacle avoidance approach and found an increase in safety. Furthermore, we demonstrate that the agent can learn to navigate more safely by keeping an individual safety distance that depends on the semantic information.
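A minimal sketch of how class-dependent danger zones could enter the reward signal: proximity is penalized more heavily for vulnerable classes. The class radii and weights below are illustrative assumptions, not the paper's trained values.

```python
# Sketch of a semantic danger-zone penalty: each obstacle class has its
# own safety radius and weight; intruding into the zone costs reward.
# The table values are invented for illustration.

# Per-class (safety distance [m], penalty weight).
DANGER_ZONES = {"child": (1.5, 2.0), "adult": (1.0, 1.0), "robot": (0.5, 0.5)}

def semantic_safety_penalty(obstacles):
    """Sum penalties over detected (class, distance) pairs; the penalty
    grows linearly as the agent intrudes into a class's danger zone."""
    penalty = 0.0
    for cls, dist in obstacles:
        radius, weight = DANGER_ZONES.get(cls, (0.5, 0.5))
        if dist < radius:
            penalty += weight * (radius - dist) / radius
    return -penalty

print(semantic_safety_penalty([("child", 0.8), ("adult", 2.0)]))  # ~ -0.93
```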
{"title":"Enhancing Navigational Safety in Crowded Environments using Semantic-Deep-Reinforcement-Learning-based Navigation","authors":"Linh Kästner, Junhui Li, Zhengcheng Shen, Jens Lambrecht","doi":"10.1109/SSRR56537.2022.10018699","DOIUrl":"https://doi.org/10.1109/SSRR56537.2022.10018699","url":null,"abstract":"Intelligent navigation among social crowds is an essential aspect of mobile robotics for applications such as delivery, health care, or assistance. Deep Reinforcement Learning emerged as an alternative planning method to conservative approaches and promises more efficient and flexible navigation. However, in highly dynamic environments employing different kinds of obstacle classes, safe navigation still presents a grand challenge. In this paper, we propose a semantic Deep-reinforcement-learning-based navigation approach that teaches object-specific safety rules by considering high-level obstacle information. In particular, the agent learns object-specific behavior by contemplating the specific danger zones to enhance safety around vulnerable object classes. We tested the approach against a benchmark obstacle avoidance approach and found an increase in safety. Furthermore, we demonstrate that the agent could learn to navigate more safely by keeping an individual safety distance dependent on the semantic information.","PeriodicalId":272862,"journal":{"name":"2022 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR)","volume":"2013 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123829587","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Graph Neural Networks for Decentralized Multi-Robot Target Tracking
Pub Date: 2021-05-18 | DOI: 10.1109/SSRR56537.2022.10018712
Lifeng Zhou, V. Sharma, Qingbiao Li, A. Prorok, Alejandro Ribeiro, Pratap Tokekar, Vijay R. Kumar
The problem of decentralized multi-robot target tracking asks for jointly selecting actions, e.g., motion primitives, for the robots to maximize target tracking performance using only local communications. One major challenge for practical implementations is making target tracking approaches scalable to large-scale problem instances. In this work, we propose a general-purpose learning architecture for collaborative target tracking at scale with decentralized communications. In particular, our learning architecture leverages a graph neural network (GNN) to capture local interactions among the robots and learns decentralized decision-making. We train the learning model by imitating an expert solution and implement the resulting model for decentralized action selection involving only local observations and communications. We demonstrate the performance of our GNN-based learning approach in a scenario of active target tracking with large networks of robots. The simulation results show that our approach nearly matches the tracking performance of the expert algorithm, yet runs several orders of magnitude faster with up to 100 robots. Moreover, it slightly outperforms a decentralized greedy algorithm while running faster (especially with more than 20 robots). The results also exhibit our approach's ability to generalize to previously unseen scenarios, e.g., larger environments and larger networks of robots.
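A schematic sketch of one decentralized GNN step (not the paper's architecture): each robot exchanges feature vectors with its communication neighbors, aggregates them, and scores its local motion primitives. The weights here are random stand-ins for the imitation-learned parameters.

```python
# One message-passing round of a toy GNN policy for decentralized
# action selection. Shapes and weights are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
F, P = 8, 5                       # feature size, number of motion primitives
W_self = rng.normal(size=(F, F))  # in the paper, learned by imitating an expert
W_nbr = rng.normal(size=(F, F))
W_out = rng.normal(size=(F, P))

def gnn_step(x: np.ndarray, adj: np.ndarray) -> np.ndarray:
    """One message-passing round: x is (n_robots, F) local observations,
    adj the (n_robots, n_robots) communication graph. Returns the index
    of the chosen motion primitive for every robot."""
    deg = np.maximum(adj.sum(axis=1, keepdims=True), 1)
    msgs = adj @ x / deg                    # mean over neighbors only
    h = np.tanh(x @ W_self + msgs @ W_nbr)  # fuse self and neighbor info
    return np.argmax(h @ W_out, axis=1)     # per-robot action selection

adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])  # line-graph comms
print(gnn_step(rng.normal(size=(3, F)), adj))
```

Because each row of the update uses only that robot's own features and its neighbors' messages, the same computation can run onboard every robot with purely local communication, which is what makes the approach scale to large teams.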
{"title":"Graph Neural Networks for Decentralized Multi-Robot Target Tracking","authors":"Lifeng Zhou, V. Sharma, Qingbiao Li, A. Prorok, Alejandro Ribeiro, Pratap Tokekar, Vijay R. Kumar","doi":"10.1109/SSRR56537.2022.10018712","DOIUrl":"https://doi.org/10.1109/SSRR56537.2022.10018712","url":null,"abstract":"The problem of decentralized multi-robot target tracking asks for jointly selecting actions, e.g., motion primitives, for the robots to maximize target tracking performance with local communications. One major challenge for practical implementations is to make target tracking approaches scalable for large-scale problem instances. In this work, we propose a general-purpose learning architecture towards collaborative target tracking at scale, with decentralized communications. Particularly, our learning architecture leverages a graph neural network (GNN) to capture local interactions of the robots and learns decentralized decision-making for the robots. We train the learning model by imitating an expert solution and implement the resulting model for decentralized action selection involving local observations and communications only. We demonstrate the performance of our GNN-based learning approach in a scenario of active target tracking with large networks of robots. The simulation results show our approach nearly matches the tracking performance of the expert algorithm, and yet runs several orders faster with up to 100 robots. Moreover, it slightly outperforms a decentralized greedy algorithm but runs faster (especially with more than 20 robots). The results also exhibit our approach's generalization capability in previously unseen scenarios, e.g., larger environments and larger networks of robots.","PeriodicalId":272862,"journal":{"name":"2022 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR)","volume":"267 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115275049","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}