{"title":"基于深度强化学习的动态环境下多机器人协同导航","authors":"Ruihua Han, Shengduo Chen, Qi Hao","doi":"10.1109/ICRA40945.2020.9197209","DOIUrl":null,"url":null,"abstract":"The challenges of multi-robot navigation in dynamic environments lie in uncertainties in obstacle complexities, partially observation of robots, and policy implementation from simulations to the real world. This paper presents a cooperative approach to address the multi-robot navigation problem (MRNP) under dynamic environments using a deep reinforcement learning (DRL) framework, which can help multiple robots jointly achieve optimal paths despite a certain degree of obstacle complexities. The novelty of this work includes threefold: (1) developing a cooperative architecture that robots can exchange information with each other to select the optimal target locations; (2) developing a DRL based framework which can learn a navigation policy to generate the optimal paths for multiple robots; (3) developing a training mechanism based on dynamics randomization which can make the policy generalized and achieve the maximum performance in the real world. The method is tested with Gazebo simulations and 4 differential drive robots. Both simulation and experiment results validate the superior performance of the proposed method in terms of success rate and travel time when compared with the other state-of-art technologies.","PeriodicalId":6859,"journal":{"name":"2020 IEEE International Conference on Robotics and Automation (ICRA)","volume":"21 1","pages":"448-454"},"PeriodicalIF":0.0000,"publicationDate":"2020-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"21","resultStr":"{\"title\":\"Cooperative Multi-Robot Navigation in Dynamic Environment with Deep Reinforcement Learning\",\"authors\":\"Ruihua Han, Shengduo Chen, Qi Hao\",\"doi\":\"10.1109/ICRA40945.2020.9197209\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The challenges of multi-robot navigation in dynamic environments lie in uncertainties in obstacle complexities, partially observation of robots, and policy implementation from simulations to the real world. This paper presents a cooperative approach to address the multi-robot navigation problem (MRNP) under dynamic environments using a deep reinforcement learning (DRL) framework, which can help multiple robots jointly achieve optimal paths despite a certain degree of obstacle complexities. The novelty of this work includes threefold: (1) developing a cooperative architecture that robots can exchange information with each other to select the optimal target locations; (2) developing a DRL based framework which can learn a navigation policy to generate the optimal paths for multiple robots; (3) developing a training mechanism based on dynamics randomization which can make the policy generalized and achieve the maximum performance in the real world. The method is tested with Gazebo simulations and 4 differential drive robots. 
Both simulation and experiment results validate the superior performance of the proposed method in terms of success rate and travel time when compared with the other state-of-art technologies.\",\"PeriodicalId\":6859,\"journal\":{\"name\":\"2020 IEEE International Conference on Robotics and Automation (ICRA)\",\"volume\":\"21 1\",\"pages\":\"448-454\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-05-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"21\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2020 IEEE International Conference on Robotics and Automation (ICRA)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICRA40945.2020.9197209\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 IEEE International Conference on Robotics and Automation (ICRA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICRA40945.2020.9197209","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Cooperative Multi-Robot Navigation in Dynamic Environment with Deep Reinforcement Learning
The challenges of multi-robot navigation in dynamic environments lie in the complexity and uncertainty of obstacles, the partial observability of each robot, and the transfer of policies from simulation to the real world. This paper presents a cooperative approach to the multi-robot navigation problem (MRNP) in dynamic environments using a deep reinforcement learning (DRL) framework, which helps multiple robots jointly achieve optimal paths despite a certain degree of obstacle complexity. The novelty of this work is threefold: (1) a cooperative architecture in which robots exchange information with one another to select optimal target locations; (2) a DRL-based framework that learns a navigation policy to generate optimal paths for multiple robots; (3) a training mechanism based on dynamics randomization that generalizes the policy and maximizes its performance in the real world. The method is tested in Gazebo simulations and on four differential-drive robots. Both simulation and experimental results validate the superior performance of the proposed method in terms of success rate and travel time compared with other state-of-the-art techniques.
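
To make the first contribution concrete, the minimal sketch below (not from the paper) shows one plausible form of cooperative target selection: robots share their poses and the team adopts the assignment of candidate target locations that minimizes total straight-line travel distance. The positions, the brute-force minimum-distance rule, and the helper name assign_targets are illustrative assumptions; the paper's actual information-exchange protocol and cost function are not specified here.

from itertools import permutations
from math import hypot

def assign_targets(robot_positions, target_positions):
    """Return, for each robot, the index of its target under the
    minimum-total-distance assignment (brute force over permutations,
    which is fine for a handful of robots)."""
    best_cost, best_order = float("inf"), None
    for order in permutations(range(len(target_positions))):
        cost = sum(
            hypot(rx - target_positions[t][0], ry - target_positions[t][1])
            for (rx, ry), t in zip(robot_positions, order)
        )
        if cost < best_cost:
            best_cost, best_order = cost, order
    return list(best_order)

if __name__ == "__main__":
    robots  = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0), (4.0, 4.0)]   # poses shared among robots
    targets = [(5.0, 5.0), (1.0, 0.0), (0.0, 5.0), (5.0, 1.0)]   # candidate target locations
    print(assign_targets(robots, targets))                       # prints [1, 3, 2, 0]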
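
The third contribution, dynamics randomization, can be sketched in a similarly hedged way: before each training episode the simulator's dynamics are re-sampled so the learned policy cannot overfit a single robot model. The parameter names, ranges, and the sample_dynamics helper below are assumptions for illustration only, not the paper's actual training configuration.

import random

# Illustrative dynamics-randomization ranges; the paper does not list its exact
# parameters or bounds, so these names and values are assumptions for the sketch.
DYNAMICS_RANGES = {
    "wheel_friction":    (0.6, 1.2),    # ground/wheel friction coefficient
    "robot_mass_kg":     (8.0, 12.0),   # chassis mass
    "control_latency_s": (0.00, 0.05),  # actuation delay
    "sensor_noise_std":  (0.00, 0.03),  # additive range-sensor noise
}

def sample_dynamics(rng=random):
    """Draw one randomized set of simulator dynamics for a training episode."""
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in DYNAMICS_RANGES.items()}

if __name__ == "__main__":
    # At the start of every training episode the simulator would be reconfigured
    # with a fresh sample, e.g. env.set_dynamics(sample_dynamics()) in a
    # hypothetical environment wrapper.
    for episode in range(3):
        print(f"episode {episode}: {sample_dynamics()}")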