Knowledge transfer from simple to complex: A safe and efficient reinforcement learning framework for autonomous driving decision-making

Advanced Engineering Informatics, Vol. 65, Article 103188 · IF 9.9 · SCI Region 1 (Engineering & Technology) · Q1 (Computer Science, Artificial Intelligence) · Pub Date: 2025-05-01 · Epub Date: 2025-02-20 · DOI: 10.1016/j.aei.2025.103188
Rongliang Zhou, Jiakun Huang, Mingjun Li, Hepeng Li, Haotian Cao, Xiaolin Song
Citations: 0

Abstract

A safe and efficient decision-making system is crucial for autonomous vehicles. However, the complexity of driving environments often limits the effectiveness of rule-based and machine learning approaches. Reinforcement learning (RL), with its robust self-learning capability and adaptability to diverse environments, offers a promising solution. Nevertheless, concerns about safety and efficiency during the training phase have hindered its widespread adoption. To address these challenges, we propose a novel RL framework, Simple to Complex Collaborative Decision (S2CD), based on the Teacher–Student Framework (TSF), to facilitate safe and efficient knowledge transfer. In this approach, the teacher model is first trained rapidly in a lightweight simulation environment. While the student model trains in more complex environments, the teacher evaluates the student's selected actions to prevent suboptimal behavior. To further enhance performance, we introduce an RL algorithm called Adaptive Clipping Proximal Policy Optimization Plus (ACPPO+), which combines samples from both teacher and student policies and applies dynamic clipping strategies based on sample importance; this improves sample efficiency and mitigates data imbalance. Additionally, Kullback–Leibler (KL) divergence is employed as a policy constraint to accelerate the student's learning process. A gradual weaning strategy then enables the student to explore independently, overcoming the limitations of the teacher. Moreover, to provide model interpretability, the Layer-wise Relevance Propagation (LRP) technique is applied. Simulation experiments in highway lane-change scenarios demonstrate that S2CD significantly enhances training efficiency and safety while reducing training costs. Even when guided by suboptimal teachers, the student consistently exceeds expectations, demonstrating the robustness and effectiveness of the S2CD framework.
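The two mechanisms the abstract describes can be illustrated concretely. The sketch below is a minimal, hypothetical rendering of (a) the teacher gating the student's chosen action during training and (b) a PPO-style clipped surrogate loss whose clip range scales with a per-sample importance weight, plus a KL penalty toward the teacher policy. The abstract does not give the actual ACPPO+ formulation, thresholds, or clipping schedule, so every function name and rule here is an assumption, not the paper's method.

```python
def teacher_gate(student_action, teacher_q, threshold=0.0):
    """Teacher evaluates the student's chosen action; if the teacher's
    estimated advantage of that action falls below a threshold, the
    teacher's own best action is substituted. (Illustrative rule only.)"""
    best = max(range(len(teacher_q)), key=lambda a: teacher_q[a])
    advantage = teacher_q[student_action] - teacher_q[best]
    return student_action if advantage >= threshold else best


def acppo_plus_loss(ratios, advantages, kl, base_clip=0.2,
                    importances=None, kl_coef=0.5):
    """Clipped surrogate over mixed teacher/student samples, with a
    sample-importance-scaled clip range and a KL penalty term
    (hypothetical form of the ACPPO+ objective)."""
    if importances is None:
        importances = [1.0] * len(ratios)
    total = 0.0
    for r, adv, w in zip(ratios, advantages, importances):
        eps = base_clip * w  # dynamic clipping: wider range for important samples
        clipped = min(max(r, 1.0 - eps), 1.0 + eps)
        total += min(r * adv, clipped * adv)  # standard PPO pessimistic bound
    # Negate the surrogate (we minimize) and add the KL constraint penalty.
    return -(total / len(ratios)) + kl_coef * kl


# Example: teacher overrides a low-value lane-change action,
# and the loss clips a large policy ratio.
action = teacher_gate(student_action=1, teacher_q=[1.0, 0.0])   # -> 0 (overridden)
loss = acppo_plus_loss(ratios=[2.0], advantages=[1.0], kl=0.0)  # ratio clipped to 1.2
```

A larger importance weight widens the clip interval, letting high-importance samples move the policy further per update, which is one plausible reading of "dynamic clipping strategies based on sample importance".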
Source journal: Advanced Engineering Informatics (Engineering & Technology – Engineering: Multidisciplinary)
CiteScore: 12.40
Self-citation rate: 18.20%
Articles per year: 292
Review time: 45 days
Journal description: Advanced Engineering Informatics is an international journal that solicits research papers with an emphasis on 'knowledge' and 'engineering applications'. The journal seeks original papers that report progress in applying methods of engineering informatics. These papers should have engineering relevance and help provide a scientific base for more reliable, spontaneous, and creative engineering decision-making. Additionally, papers should demonstrate the science of supporting knowledge-intensive engineering tasks and validate the generality, power, and scalability of new methods through rigorous evaluation, preferably both qualitative and quantitative. Abstracting and indexing for Advanced Engineering Informatics include Science Citation Index Expanded, Scopus, and INSPEC.
Latest articles in this journal:
- Automated generation of assembly schedules for precast building projects under uncertainty using reinforcement learning and Monte Carlo sampling
- Continual health prognosis of machines via hypergraph topology-aware knowledge preserving and replay
- Application of GAN-based data augmentation and filtering methods for imbalanced grinding wheel specification classification
- A physics-informed and stochastic KAN framework for car-following behavior modeling of human-driven vehicles in mixed traffic flow
- Singularity-free prescribed performance control of a quadrotor UAV for precision agriculture