High-level Decision Making for Safe and Reasonable Autonomous Lane Changing using Reinforcement Learning

Branka Mirchevska, Christian Pek, M. Werling, M. Althoff, J. Boedecker

2018 21st International Conference on Intelligent Transportation Systems (ITSC), November 2018
DOI: 10.1109/ITSC.2018.8569448
Citations: 116
Abstract
Machine learning techniques have been shown to outperform many rule-based systems for decision-making in autonomous vehicles. However, applying machine learning is challenging due to the risk of executing unsafe actions and to slow learning. We address these issues with a reinforcement learning-based approach combined with formal safety verification, ensuring that only safe actions are chosen at any time. We let a deep reinforcement learning (RL) agent learn to drive as close as possible to a desired velocity by executing reasonable lane changes on simulated highways with an arbitrary number of lanes. By using a minimal state representation of only 13 continuous features together with a Deep Q-Network (DQN), we achieve fast learning. Our RL agent learns the desired task without causing collisions and outperforms a complex, rule-based agent that we use for benchmarking.
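The sketch below is not the authors' implementation; it is a minimal illustration of the idea described in the abstract: a small Q-network over a 13-dimensional continuous state whose action selection is restricted to actions that pass an external safety check. The action set, network size, and the `is_safe` stub standing in for formal safety verification are all assumptions made for illustration.

```python
# Minimal sketch (assumed, not the paper's code): DQN-style action selection
# restricted to a verified-safe subset of actions.
import random
import torch
import torch.nn as nn

STATE_DIM = 13  # minimal continuous state representation from the paper
ACTIONS = ["keep_lane", "change_left", "change_right"]  # assumed action set


class QNetwork(nn.Module):
    """Small fully connected Q-network mapping a state to per-action values."""

    def __init__(self, state_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)


def is_safe(state: torch.Tensor, action: int) -> bool:
    """Placeholder for the formal safety verification of an intended maneuver.

    In the paper this is a formal check; here it is a stub that accepts every
    action so that the surrounding control flow can be demonstrated.
    """
    return True


def select_action(q_net: QNetwork, state: torch.Tensor, epsilon: float) -> int:
    """Epsilon-greedy selection over the subset of actions verified as safe."""
    safe_actions = [a for a in range(len(ACTIONS)) if is_safe(state, a)]
    if not safe_actions:
        # Defensive fallback when no maneuver is verified as safe.
        return ACTIONS.index("keep_lane")
    if random.random() < epsilon:
        # Explore, but only among safe actions.
        return random.choice(safe_actions)
    with torch.no_grad():
        q_values = q_net(state)
    # Exploit: highest Q-value within the safe subset.
    return max(safe_actions, key=lambda a: q_values[a].item())


if __name__ == "__main__":
    q_net = QNetwork(STATE_DIM, len(ACTIONS))
    dummy_state = torch.zeros(STATE_DIM)  # stand-in for a real observation
    print(ACTIONS[select_action(q_net, dummy_state, epsilon=0.1)])
```

Restricting both exploration and the greedy choice to the safe subset is what allows learning to proceed without the agent ever executing an action rejected by the verification step.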