Motion Primitives as the Action Space of Deep Q-Learning for Planning in Autonomous Driving
Tristan Schneider; Matheus V. A. Pedrosa; Timo P. Gros; Verena Wolf; Kathrin Flaßkamp
IEEE Transactions on Intelligent Transportation Systems, vol. 25, no. 11, pp. 17852-17864, published 2024-09-24. DOI: 10.1109/TITS.2024.3436530. https://ieeexplore.ieee.org/document/10693315/
Motion planning for autonomous vehicles is commonly implemented via graph-search methods, which limit the model accuracy and environmental complexity that can be handled under real-time constraints. In contrast, reinforcement learning, specifically the deep Q-learning (DQL) algorithm, offers an interesting alternative for real-time solutions. Some approaches, such as the deep Q-network (DQN), model the RL action space by quantizing the continuous control inputs. Here, we propose to use motion primitives, which encode continuous-time nonlinear system behavior, as the action space. The novel motion-primitive DQL planning methodology is evaluated in a numerical example using a single-track vehicle model and different planning scenarios. We show that our approach outperforms a state-of-the-art graph-search method in computation time and probability of reaching the goal.
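To make the core idea concrete, the following is a minimal sketch of a motion-primitive action space for a Q-network. It assumes a kinematic single-track (bicycle) model and an arbitrary 3x3 grid of constant acceleration/steering values; the paper's actual vehicle model, primitive library, network architecture, and parameters are not specified in the abstract, so everything below is illustrative only.

```python
# Illustrative sketch, not the authors' implementation: all parameters,
# primitive definitions, and the network architecture are assumptions.
import numpy as np
import torch
import torch.nn as nn

WHEELBASE = 2.7        # [m], assumed wheelbase of a kinematic single-track model
DT, HORIZON = 0.1, 10  # integration step [s] and primitive length [steps], assumed


def rollout_primitive(state, accel, steer):
    """Integrate the kinematic single-track model under constant (accel, steer).

    state = (x, y, heading, speed); returns the continuous-time rollout
    sampled at DT as an array of shape (HORIZON, 4).
    """
    x, y, psi, v = state
    traj = []
    for _ in range(HORIZON):
        x += v * np.cos(psi) * DT
        y += v * np.sin(psi) * DT
        psi += v / WHEELBASE * np.tan(steer) * DT
        v = max(0.0, v + accel * DT)
        traj.append((x, y, psi, v))
    return np.array(traj)


# The action space is a finite library of motion primitives, here built from
# combinations of constant accelerations [m/s^2] and steering angles [rad]
# (values chosen purely for illustration).
PRIMITIVES = [(a, d) for a in (-2.0, 0.0, 2.0) for d in (-0.3, 0.0, 0.3)]


class QNetwork(nn.Module):
    """Q-network mapping a state feature vector to one Q-value per primitive."""

    def __init__(self, state_dim=4, n_primitives=len(PRIMITIVES)):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, n_primitives),
        )

    def forward(self, state):
        return self.net(state)


# Greedy action selection: pick the primitive with the highest Q-value and
# execute its whole continuous-time rollout before re-planning.
q_net = QNetwork()
state = np.array([0.0, 0.0, 0.0, 5.0])
with torch.no_grad():
    q_values = q_net(torch.tensor(state, dtype=torch.float32))
accel, steer = PRIMITIVES[int(q_values.argmax())]
trajectory = rollout_primitive(state, accel, steer)
```

The design point illustrated here is that each discrete DQN action indexes an entire precomputed, dynamically feasible trajectory segment rather than a single quantized control input, so the learned policy plans over continuous-time system behavior while keeping the Q-learning action set finite.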
Journal description:
The journal covers the theoretical, experimental, and operational aspects of electrical and electronics engineering and information technologies as applied to Intelligent Transportation Systems (ITS). Intelligent Transportation Systems are defined as those systems utilizing synergistic technologies and systems engineering concepts to develop and improve transportation systems of all kinds. The scope of this interdisciplinary activity includes the promotion, consolidation, and coordination of ITS technical activities among IEEE entities, and providing a focus for cooperative activities, both internally and externally.