Riccardo Franceschini, M. Fumagalli, J. Becerra. 2022 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR), published 2022-11-08. DOI: 10.1109/SSRR56537.2022.10018735
Learn to efficiently exploit cost maps by combining RRT* with Reinforcement Learning
Safe autonomous navigation of robots in complex and cluttered environments is a crucial task and remains an open challenge even in 2D environments. Efficiently minimizing multiple constraints, such as safety or battery drain, requires the ability to understand and leverage information from different cost maps. Rapidly-exploring random tree (RRT) methods are widely used in current path planning thanks to their efficiency in quickly finding a path to the goal. However, these approaches converge slowly toward an optimal solution, especially when the planner must account for aspects such as safety or battery consumption beyond simply reaching the goal. This work therefore proposes a sample-efficient, cost-aware RRT* sampling method that outperforms previous approaches by exploiting information gathered from map analysis. In particular, a Reinforcement Learning agent guides the RRT* sampling toward a near-optimal solution. The performance of the proposed method is demonstrated against different RRT* implementations in multiple synthetic environments.
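To make the core idea concrete, the sketch below shows one simple way a sampler can exploit cost-map information: cells are drawn with probability decreasing in their cost, so low-cost (e.g. safer) regions are explored more often. This is an illustrative hand-tuned softmax bias, not the paper's method; the paper replaces such a fixed heuristic with a learned Reinforcement Learning policy, and the `beta` temperature parameter here is an assumption of this sketch.

```python
import numpy as np

def cost_aware_sampler(cost_map, rng, beta=5.0):
    """Sample a grid cell with probability decreasing in its cost.

    Illustrative only: a fixed softmax bias stands in for the learned
    RL policy described in the paper. `beta` controls how strongly the
    sampler prefers low-cost cells (beta=0 recovers uniform sampling).
    """
    flat = cost_map.ravel()
    # Normalize costs to [0, 1], then softmax over their negatives so
    # that cheaper cells get exponentially more probability mass.
    logits = -beta * (flat - flat.min()) / (np.ptp(flat) + 1e-9)
    probs = np.exp(logits)
    probs /= probs.sum()
    idx = rng.choice(flat.size, p=probs)
    return np.unravel_index(idx, cost_map.shape)

rng = np.random.default_rng(0)
cost_map = np.ones((10, 10))
cost_map[:, 5:] = 10.0  # right half is expensive (e.g. unsafe)
samples = [cost_aware_sampler(cost_map, rng) for _ in range(1000)]
frac_cheap = sum(c < 5 for _, c in samples) / len(samples)
print(f"fraction of samples in the cheap half: {frac_cheap:.2f}")
```

Inside an RRT* loop, this sampler would replace the uniform random configuration draw, concentrating tree growth in low-cost regions so fewer samples are needed to approach a good path.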