{"title":"A Hierarchical Framework for Multi-Lane Autonomous Driving Based on Reinforcement Learning","authors":"Xiaohui Zhang;Jie Sun;Yunpeng Wang;Jian Sun","doi":"10.1109/OJITS.2023.3300748","DOIUrl":null,"url":null,"abstract":"This paper proposes a hierarchical framework integrating deep reinforcement learning (DRL) and rule-based methods for multi-lane autonomous driving. We define an instantaneous desired speed (IDS) to mimic the common motivation for higher speed in different traffic situations as an intermediate action. High-level DRL is utilized to generate IDS directly, while the low-level rule-based policies including car following (CF) models and three-stage lane changing (LC) models are governed by the common goal of IDS. Not only the coupling between CF and LC behaviors is captured by the hierarchy, but also utilizing the benefits from both DRL and rule-based methods like more interpretability and learning ability. Owing to the decomposition and combination with rule-based models, traffic flow operations can be taken into account for individually controlled automated vehicles (AVs) with an extension of traffic flow adaptive (TFA) strategy through exposed critical parameters. A comprehensive evaluation for the proposed framework is conducted from both the individual and system perspective, comparing with a pure DRL model and widely used rule-based model IDM with MOBIL. The simulation results prove the effectiveness of the proposed framework.","PeriodicalId":100631,"journal":{"name":"IEEE Open Journal of Intelligent Transportation Systems","volume":"4 ","pages":"626-638"},"PeriodicalIF":4.6000,"publicationDate":"2023-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/iel7/8784355/9999144/10198672.pdf","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Open Journal of Intelligent Transportation Systems","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10198672/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Abstract
This paper proposes a hierarchical framework that integrates deep reinforcement learning (DRL) and rule-based methods for multi-lane autonomous driving. We define an instantaneous desired speed (IDS) as an intermediate action that mimics the common motivation for higher speed across different traffic situations. A high-level DRL policy generates the IDS directly, while the low-level rule-based policies, including car-following (CF) models and a three-stage lane-changing (LC) model, are governed by the common goal of the IDS. The hierarchy not only captures the coupling between CF and LC behaviors but also combines the benefits of DRL and rule-based methods, such as greater interpretability and learning ability. Owing to this decomposition and the combination with rule-based models, traffic flow operations can be taken into account for individually controlled automated vehicles (AVs) through a traffic flow adaptive (TFA) extension that acts on exposed critical parameters. A comprehensive evaluation of the proposed framework is conducted from both the individual and the system perspective, in comparison with a pure DRL model and the widely used rule-based combination of IDM with MOBIL. The simulation results demonstrate the effectiveness of the proposed framework.
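
To make the described division of labor concrete, the sketch below shows one way such a hierarchy could be wired together: a high-level policy emits an instantaneous desired speed (IDS), and a standard Intelligent Driver Model (IDM) car-following controller uses that IDS as its desired-speed parameter. This is a minimal illustration under stated assumptions, not the paper's implementation; the class and function names (`HighLevelPolicy`, `idm_acceleration`, `step`), the parameter values, and the random stand-in for the trained policy are all hypothetical, and the lane-changing logic and TFA strategy are omitted.

```python
# Minimal sketch of a hierarchical control loop: high-level IDS, low-level CF.
# All names and parameter values are illustrative assumptions, not the paper's code.
import math
import random


def idm_acceleration(v, gap, dv, desired_speed,
                     a_max=1.5, b=2.0, s0=2.0, T=1.5, delta=4.0):
    """Standard IDM acceleration.

    v             -- ego speed [m/s]
    gap           -- bumper-to-bumper gap to the leader [m]
    dv            -- approaching rate, ego speed minus leader speed [m/s]
    desired_speed -- supplied here by the high-level policy as the IDS [m/s]
    """
    s_star = s0 + v * T + v * dv / (2.0 * math.sqrt(a_max * b))
    return a_max * (1.0 - (v / desired_speed) ** delta - (s_star / gap) ** 2)


class HighLevelPolicy:
    """Placeholder for a DRL policy that maps an observation to an IDS."""

    def act(self, observation):
        # A trained policy network would go here; a random speed in a
        # plausible range stands in for it in this sketch.
        return random.uniform(15.0, 30.0)  # IDS in m/s


def step(policy, observation, v, gap, dv, dt=0.1):
    """One control step: query the high-level IDS, then track it with the CF model."""
    ids = policy.act(observation)              # high-level intermediate action
    accel = idm_acceleration(v, gap, dv, ids)  # low-level rule-based CF response
    return max(v + accel * dt, 0.0), ids


if __name__ == "__main__":
    policy = HighLevelPolicy()
    v_next, ids = step(policy, observation=None, v=20.0, gap=30.0, dv=1.0)
    print(f"IDS = {ids:.1f} m/s, next ego speed = {v_next:.2f} m/s")
```

Feeding the IDS in as the desired-speed parameter of an otherwise unmodified CF model reflects the abstract's point that the rule-based low level stays interpretable while the learned high level only steers it through a single exposed quantity.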