Demonstration Data-Driven Parameter Adjustment for Trajectory Planning in Highly Constrained Environments
Wangtao Lu; Lei Chen; Yunkai Wang; Yufei Wei; Zifei Wu; Rong Xiong; Yue Wang
IEEE Robotics and Automation Letters, vol. 9, no. 12, pp. 11641-11648
Published: 2024-11-11
DOI: 10.1109/LRA.2024.3495454 (https://ieeexplore.ieee.org/document/10749994/)
Citations: 0
Abstract
Trajectory planning in highly constrained environments is crucial for robotic navigation. Classical algorithms are widely used for their interpretability, generalization, and system robustness. However, these algorithms often require parameter retuning when adapting to new scenarios. To address this issue, we propose a demonstration data-driven reinforcement learning (RL) method for automatic parameter adjustment. Our approach consists of two main components: a front-end policy network and a back-end asynchronous controller. The policy network selects appropriate parameters for the trajectory planner, while a discriminator in a Conditional Generative Adversarial Network (CGAN) evaluates the planned trajectory, and this evaluation serves as an imitation reward in RL. The asynchronous controller is employed for high-frequency trajectory tracking. Experiments in both simulation and the real world demonstrate that our proposed method significantly enhances the performance of classical algorithms.
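The sketch below is a minimal illustration (not the authors' code) of the reward design described in the abstract: a policy network outputs parameters for a trajectory planner, and a CGAN-style discriminator scores the resulting trajectory, with that score used as the imitation reward for the RL update. The network sizes, the trajectory encoding, and the toy `plan_trajectory` stub standing in for the classical planner are all assumptions made for illustration only.

```python
# Minimal sketch of discriminator-score-as-imitation-reward for planner
# parameter tuning. All dimensions and the toy planner are assumptions.
import torch
import torch.nn as nn

TRAJ_DIM = 2 * 50      # assumed: 50 (x, y) waypoints, flattened
COND_DIM = 16          # assumed: encoding of the local environment / goal
PARAM_DIM = 4          # assumed: number of tunable planner parameters


class ParameterPolicy(nn.Module):
    """Front-end policy: maps an environment encoding to planner parameters."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(COND_DIM, 64), nn.ReLU(),
            nn.Linear(64, PARAM_DIM), nn.Sigmoid(),  # parameters scaled to (0, 1)
        )

    def forward(self, cond):
        return self.net(cond)


class TrajectoryDiscriminator(nn.Module):
    """CGAN-style discriminator: scores a trajectory conditioned on the environment."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(TRAJ_DIM + COND_DIM, 128), nn.ReLU(),
            nn.Linear(128, 1), nn.Sigmoid(),  # probability the trajectory is demonstration-like
        )

    def forward(self, traj, cond):
        return self.net(torch.cat([traj, cond], dim=-1))


def plan_trajectory(params, cond):
    """Hypothetical stand-in for the classical planner: produces a trajectory
    that depends on the chosen parameters (a simple differentiable toy here)."""
    base = torch.linspace(0.0, 1.0, TRAJ_DIM).unsqueeze(0)
    return base * params.mean(dim=-1, keepdim=True) + 0.01 * cond.mean(dim=-1, keepdim=True)


if __name__ == "__main__":
    policy = ParameterPolicy()
    discriminator = TrajectoryDiscriminator()

    cond = torch.randn(1, COND_DIM)        # environment / goal encoding
    params = policy(cond)                  # planner parameters chosen by the policy
    traj = plan_trajectory(params, cond)   # trajectory from the (toy) planner

    # The discriminator's score acts as the imitation reward; a real pipeline
    # would feed this reward into a policy-gradient or actor-critic update.
    imitation_reward = discriminator(traj.detach(), cond).item()
    print(f"planner parameters: {params.detach().numpy()}")
    print(f"imitation reward from discriminator: {imitation_reward:.3f}")
```

In this framing, the discriminator trained against demonstration trajectories supplies a dense learning signal without hand-crafting a reward for every new scenario, which is the motivation the abstract gives for the demonstration data-driven approach.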
Journal Introduction
The scope of this journal is to publish peer-reviewed articles that provide a timely and concise account of innovative research ideas and application results, reporting significant theoretical findings and application case studies in areas of robotics and automation.