Barrier Lyapunov Function-Based Safe Reinforcement Learning Algorithm for Autonomous Vehicles with System Uncertainty
Yuxiang Zhang, Xiaoling Liang, S. Ge, B. Gao, Tong-heng Lee
2021 21st International Conference on Control, Automation and Systems (ICCAS), October 12, 2021. DOI: 10.23919/ICCAS52745.2021.9649902
Guaranteeing safety and performance under a wide range of operating conditions remains technically critical and practically challenging for the broad deployment of autonomous vehicles. For such safety-critical systems, safe performance must be ensured even during the reinforcement learning phase, in the presence of system uncertainty. To address this issue, a Barrier Lyapunov Function-based safe reinforcement learning algorithm (BLF-SRL) is proposed for nonlinear systems formulated in strict-feedback form. The approach embeds Barrier Lyapunov Function terms into the optimized backstepping control method so that the state variables remain within the designed safety region during learning, even when unknown but bounded system uncertainty is present. More specifically, the overall system control is designed with the optimized backstepping technique under an actor-critic framework, which optimizes the virtual control of each backstepping subsystem. The optimal virtual control is decomposed into Barrier Lyapunov Function terms plus an adaptive term learned with deep neural networks, which enables safe exploration throughout the learning process. The principle of Bellman optimality is then satisfied by iteratively updating the independently approximated actor and critic to solve the Hamilton-Jacobi-Bellman equation within adaptive dynamic programming. Notably, the variance of the control performance under uncertainty is also reduced with the proposed method. The effectiveness of the method is verified on motion control problems for autonomous vehicles through comparison simulations.
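For illustration, a standard logarithmic Barrier Lyapunov Function of the kind used in constrained backstepping designs (a common form in the BLF literature; the paper's exact construction may differ) is

V_b(z_1) = \frac{1}{2} \ln \frac{k_b^{2}}{k_b^{2} - z_1^{2}}, \qquad |z_1| < k_b,

where z_1 denotes the tracking error of a backstepping subsystem and k_b is the designed safety bound. Since V_b grows without bound as |z_1| approaches k_b, keeping V_b bounded along closed-loop trajectories keeps the error, and hence the constrained state, strictly inside the safety region throughout learning.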