Pu Feng;Rongye Shi;Size Wang;Junkang Liang;Xin Yu;Simin Li;Wenjun Wu
{"title":"利用物理信息强化学习实现安全高效的多代理碰撞规避","authors":"Pu Feng;Rongye Shi;Size Wang;Junkang Liang;Xin Yu;Simin Li;Wenjun Wu","doi":"10.1109/LRA.2024.3487491","DOIUrl":null,"url":null,"abstract":"Reinforcement learning (RL) has shown great promise in addressing multi-agent collision avoidance challenges. However, existing RL-based methods often suffer from low training efficiency and poor action safety. To tackle these issues, we introduce a physics-informed reinforcement learning framework equipped with two modules: a Potential Field (PF) module and a Multi-Agent Multi-Level Safety (MAMLS) module. The PF module uses the Artificial Potential Field method to compute a regularization loss, adaptively integrating it into the critic's loss to enhance training efficiency. The MAMLS module formulates action safety as a constrained optimization problem, deriving safe actions by solving this optimization. Furthermore, to better address the characteristics of multi-agent collision avoidance tasks, multi-agent multi-level constraints are introduced. The results of simulations and real-world experiments showed that our physics-informed framework offers a significant improvement in terms of both the efficiency of training and safety-related metrics over advanced baseline methods.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"9 12","pages":"11138-11145"},"PeriodicalIF":4.6000,"publicationDate":"2024-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Safe and Efficient Multi-Agent Collision Avoidance With Physics-Informed Reinforcement Learning\",\"authors\":\"Pu Feng;Rongye Shi;Size Wang;Junkang Liang;Xin Yu;Simin Li;Wenjun Wu\",\"doi\":\"10.1109/LRA.2024.3487491\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Reinforcement learning (RL) has shown great promise in addressing multi-agent collision avoidance challenges. 
However, existing RL-based methods often suffer from low training efficiency and poor action safety. To tackle these issues, we introduce a physics-informed reinforcement learning framework equipped with two modules: a Potential Field (PF) module and a Multi-Agent Multi-Level Safety (MAMLS) module. The PF module uses the Artificial Potential Field method to compute a regularization loss, adaptively integrating it into the critic's loss to enhance training efficiency. The MAMLS module formulates action safety as a constrained optimization problem, deriving safe actions by solving this optimization. Furthermore, to better address the characteristics of multi-agent collision avoidance tasks, multi-agent multi-level constraints are introduced. The results of simulations and real-world experiments showed that our physics-informed framework offers a significant improvement in terms of both the efficiency of training and safety-related metrics over advanced baseline methods.\",\"PeriodicalId\":13241,\"journal\":{\"name\":\"IEEE Robotics and Automation Letters\",\"volume\":\"9 12\",\"pages\":\"11138-11145\"},\"PeriodicalIF\":4.6000,\"publicationDate\":\"2024-10-29\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Robotics and Automation Letters\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10737374/\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"ROBOTICS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Robotics and Automation 
Letters","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10737374/","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ROBOTICS","Score":null,"Total":0}
Safe and Efficient Multi-Agent Collision Avoidance With Physics-Informed Reinforcement Learning
Reinforcement learning (RL) has shown great promise in addressing multi-agent collision avoidance challenges. However, existing RL-based methods often suffer from low training efficiency and poor action safety. To tackle these issues, we introduce a physics-informed reinforcement learning framework equipped with two modules: a Potential Field (PF) module and a Multi-Agent Multi-Level Safety (MAMLS) module. The PF module uses the Artificial Potential Field method to compute a regularization loss, adaptively integrating it into the critic's loss to enhance training efficiency. The MAMLS module formulates action safety as a constrained optimization problem and derives safe actions by solving it. Furthermore, to better match the characteristics of multi-agent collision avoidance tasks, multi-agent multi-level constraints are introduced. Simulations and real-world experiments show that our physics-informed framework significantly improves both training efficiency and safety-related metrics over strong baseline methods.
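The abstract outlines two mechanisms: an Artificial Potential Field (APF) value used as a regularizer on the critic's loss, and a safety layer that replaces unsafe actions by solving a constrained optimization. The sketch below illustrates both ideas in minimal form; it is an assumption-laden illustration, not the paper's implementation. The fixed weight `lam` stands in for the paper's adaptive weighting, and `safe_action` substitutes a simple braking fallback for the actual MAMLS constrained-optimization solve, neither of which is specified in the abstract.

```python
import math

def apf_potential(pos, goal, obstacles, k_att=1.0, k_rep=1.0, d0=2.0):
    """Artificial Potential Field value at a 2-D point `pos` (tuples (x, y))."""
    # attractive term pulls the agent toward its goal
    u = 0.5 * k_att * ((pos[0] - goal[0]) ** 2 + (pos[1] - goal[1]) ** 2)
    # repulsive term pushes the agent away from obstacles within range d0
    for ox, oy in obstacles:
        d = math.hypot(pos[0] - ox, pos[1] - oy)
        if 0 < d < d0:
            u += 0.5 * k_rep * (1.0 / d - 1.0 / d0) ** 2
    return u

def critic_loss(td_error, pos, goal, obstacles, lam=0.1):
    # squared TD error plus the APF value as a physics-informed regularizer;
    # `lam` is a fixed stand-in for the paper's adaptive integration
    return td_error ** 2 + lam * apf_potential(pos, goal, obstacles)

def safe_action(action, pos, neighbors, d_safe=1.0):
    # crude stand-in for the MAMLS module: veto any action whose next
    # position would violate the minimum-separation constraint, braking
    # instead of solving the full constrained optimization
    nx, ny = pos[0] + action[0], pos[1] + action[1]
    for qx, qy in neighbors:
        if math.hypot(nx - qx, ny - qy) < d_safe:
            return (0.0, 0.0)
    return action
```

In the paper's setting the safety constraints are multi-agent and multi-level, so the real solve would trade off constraint tiers rather than simply braking as above.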
Journal introduction:
The scope of this journal is to publish peer-reviewed articles that provide a timely and concise account of innovative research ideas and application results, reporting significant theoretical findings and application case studies in areas of robotics and automation.