A lane changing model of automatic driving vehicle based on Reinforcement Learning

Chuyan Zhang, A. Ni, Ce Yu, Linjie Gao, Qinqin Chen

2022 IEEE International Conference on Networking, Sensing and Control (ICNSC), published 2022-12-15
DOI: 10.1109/ICNSC55942.2022.10004106
Lane changing has a great impact on traffic efficiency and safety, so a well-designed lane-changing model for automated vehicles is of great significance for improving both. Most traditional lane-changing models require a constrained optimization problem to be formulated and solved over the whole maneuver, whereas in reinforcement learning only the current state is taken as input and an action is output directly to the vehicle. Based on the deep deterministic policy gradient (DDPG) algorithm in reinforcement learning, a new lane-changing model for automated driving vehicles is constructed that controls the lateral and longitudinal movements of the vehicle simultaneously. Safety, efficiency, clearance, headway, and comfort terms are combined into the reward function to optimize its performance, and a safe modification model is proposed to prevent unsafe behavior at each time step. The proposed model converges quickly in the training phase. Compared with human drivers, it makes safe and efficient lane changes with shorter headway and duration, and compared with a conventional dynamic lane-changing trajectory planning model, it performs better at mitigating collision risk.
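To make the two ideas in the abstract concrete, the following is a minimal sketch, not the authors' code: a composite reward of the kind described (safety, efficiency, clearance, headway, and comfort terms combined by weights) and a simple "safe modification" step that overrides an unsafe action before it reaches the vehicle. All weights, thresholds, state fields, and function names are illustrative assumptions.

```python
# Hedged sketch of a composite lane-changing reward and a safe-modification
# step; weights, thresholds and state fields are assumptions, not the paper's values.
from dataclasses import dataclass


@dataclass
class State:
    speed: float          # ego longitudinal speed [m/s]
    desired_speed: float  # target speed [m/s]
    gap_front: float      # clearance to the leading vehicle [m]
    headway: float        # time headway to the leading vehicle [s]
    accel: float          # longitudinal acceleration [m/s^2]
    lat_accel: float      # lateral acceleration [m/s^2]


# Hypothetical weights for the reward terms.
W_SAFE, W_EFF, W_CLEAR, W_HEAD, W_COMF = 2.0, 1.0, 0.5, 0.5, 0.3
MIN_GAP, MIN_HEADWAY = 5.0, 1.0  # assumed safety thresholds


def reward(s: State, collided: bool) -> float:
    """Weighted sum of safety, efficiency, clearance, headway and comfort terms."""
    r_safety = -10.0 if collided else 0.0
    r_eff = -abs(s.speed - s.desired_speed) / max(s.desired_speed, 1e-3)
    r_clear = min(s.gap_front / (2 * MIN_GAP), 1.0)      # reward larger gaps
    r_head = min(s.headway / (2 * MIN_HEADWAY), 1.0)     # reward larger headway
    r_comf = -(abs(s.accel) + abs(s.lat_accel)) / 10.0   # penalize harsh motion
    return (W_SAFE * r_safety + W_EFF * r_eff + W_CLEAR * r_clear
            + W_HEAD * r_head + W_COMF * r_comf)


def safe_modification(action: tuple, s: State) -> tuple:
    """Override the policy's (accel, steer) output if the current state is unsafe."""
    accel, steer = action
    if s.gap_front < MIN_GAP or s.headway < MIN_HEADWAY:
        accel = min(accel, -2.0)  # force braking
        steer = 0.0               # abort the lateral manoeuvre
    return accel, steer


if __name__ == "__main__":
    s = State(speed=25.0, desired_speed=30.0, gap_front=4.0,
              headway=0.8, accel=1.0, lat_accel=0.4)
    print(reward(s, collided=False))
    print(safe_modification((1.5, 0.2), s))
```

In an actor-critic setup such as DDPG, a reward shaped this way lets the critic trade off speed tracking against gap and comfort penalties, while the hard override keeps exploration from producing collisions at any single time step.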