Safe Reinforcement Learning-Based Eco-Driving Control for Mixed Traffic Flows With Disturbances

IF 8.4 | Region 1, Engineering & Technology | Q1 ENGINEERING, CIVIL | IEEE Transactions on Intelligent Transportation Systems | Pub Date: 2025-03-04 | DOI: 10.1109/TITS.2025.3544812
Ke Lu;Dongjun Li;Qun Wang;Kaidi Yang;Lin Zhao;Ziyou Song
{"title":"Safe Reinforcement Learning-Based Eco-Driving Control for Mixed Traffic Flows With Disturbances","authors":"Ke Lu;Dongjun Li;Qun Wang;Kaidi Yang;Lin Zhao;Ziyou Song","doi":"10.1109/TITS.2025.3544812","DOIUrl":null,"url":null,"abstract":"This paper presents a safe learning-based eco-driving framework tailored for mixed traffic flows, which aims to optimize energy efficiency while guaranteeing system constraints during real-system operations. Even though reinforcement learning (RL) is capable of optimizing energy efficiency in intricate environments, it is challenged by safety requirements during both the training and deployment stages. The lack of safety guarantees impedes the application of RL to real-world problems. Compared with RL, model predicted control (MPC) can handle constrained dynamics systems, ensuring safe driving. However, the major challenges lie in complicated eco-driving tasks and the presence of disturbances, which pose difficulties for MPC design and constraint satisfaction. To address these limitations, the proposed framework incorporates the tube-based enhanced MPC (RMPC) to ensure the safe execution of the RL policy under disturbances, thereby improving the control robustness. RL not only optimizes the energy efficiency of the connected and automated vehicle in mixed traffic but also handles more uncertain scenarios, in which the energy consumption of the human-driven vehicle and its diverse and stochastic driving behaviors are considered in the optimization framework. Simulation results demonstrate that the proposed algorithm achieves an average improvement of 10.88% in holistic energy efficiency compared to the RMPC technique, while effectively preventing inter-vehicle collisions when compared to the RL algorithm.","PeriodicalId":13416,"journal":{"name":"IEEE Transactions on Intelligent Transportation Systems","volume":"26 4","pages":"4948-4959"},"PeriodicalIF":8.4000,"publicationDate":"2025-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Intelligent Transportation Systems","FirstCategoryId":"5","ListUrlMain":"https://ieeexplore.ieee.org/document/10910069/","RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, CIVIL","Score":null,"Total":0}
Citations: 0

Abstract

This paper presents a safe learning-based eco-driving framework tailored for mixed traffic flows, which aims to optimize energy efficiency while guaranteeing satisfaction of system constraints during real-system operation. Although reinforcement learning (RL) can optimize energy efficiency in intricate environments, it is challenged by safety requirements during both the training and deployment stages, and this lack of safety guarantees impedes the application of RL to real-world problems. Compared with RL, model predictive control (MPC) can handle constrained dynamic systems and thereby ensure safe driving. However, complicated eco-driving tasks and the presence of disturbances pose difficulties for MPC design and constraint satisfaction. To address these limitations, the proposed framework incorporates a tube-based robust MPC (RMPC) to ensure safe execution of the RL policy under disturbances, thereby improving control robustness. RL not only optimizes the energy efficiency of the connected and automated vehicle in mixed traffic but also handles more uncertain scenarios, in which the energy consumption of the human-driven vehicle and its diverse, stochastic driving behaviors are considered in the optimization framework. Simulation results demonstrate that the proposed algorithm achieves an average improvement of 10.88% in holistic energy efficiency compared to the RMPC technique, while effectively preventing inter-vehicle collisions compared to the RL algorithm.
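The abstract describes a two-layer architecture: a learned policy proposes eco-driving actions, and a tube-based robust MPC layer checks and, if necessary, modifies them so that inter-vehicle constraints hold even under disturbances. As a rough illustration of that safety-filter idea, here is a minimal Python sketch; the one-step gap model, disturbance bound, constraint tightening, stand-in policy, and all parameter values are assumptions for illustration and do not reproduce the paper's RMPC design.

```python
import numpy as np

# Minimal sketch of the RL-plus-tube-MPC "safety filter" idea described in the
# abstract. It is NOT the authors' formulation: the gap dynamics, disturbance
# bound, constraint tightening, and the stand-in policy are all illustrative.

DT = 1.0                  # prediction step [s] (coarse, for illustration only)
D_MIN = 5.0               # nominal minimum inter-vehicle gap [m]
W_MAX = 0.5               # assumed bound on the gap disturbance over one step [m]
TIGHTENED_GAP = D_MIN + W_MAX  # tube-style tightened constraint
A_MIN, A_MAX = -3.0, 2.0  # acceleration limits [m/s^2]


def rl_policy(gap: float, rel_speed: float) -> float:
    """Stand-in for a learned eco-driving policy: proposes an acceleration
    that tracks a short headway for smooth, energy-efficient following."""
    return float(np.clip(0.5 * (gap - 6.0) + 0.5 * rel_speed, A_MIN, A_MAX))


def predicted_gap(gap: float, rel_speed: float, accel: float) -> float:
    """One-step kinematic prediction of the inter-vehicle gap (Euler sketch)."""
    return gap + (rel_speed - accel * DT) * DT


def safety_filter(a_rl: float, gap: float, rel_speed: float) -> float:
    """Project the RL action onto accelerations whose predicted gap stays
    above the tightened bound, i.e. admissible despite the disturbance."""
    if predicted_gap(gap, rel_speed, a_rl) >= TIGHTENED_GAP:
        return a_rl  # RL action is already admissible
    # Otherwise pick the admissible acceleration closest to the RL proposal.
    candidates = np.linspace(A_MIN, a_rl, 200)
    mask = [predicted_gap(gap, rel_speed, a) >= TIGHTENED_GAP for a in candidates]
    safe = candidates[mask]
    return float(safe[-1]) if safe.size else A_MIN  # fall back to full braking


if __name__ == "__main__":
    gap, rel_speed = 6.5, -2.0      # closing in on the preceding vehicle
    a_rl = rl_policy(gap, rel_speed)
    a_safe = safety_filter(a_rl, gap, rel_speed)
    print(f"RL proposal {a_rl:+.2f} m/s^2 -> filtered action {a_safe:+.2f} m/s^2")
```

The design intent this sketch mimics is that the learning layer stays free to optimize holistic energy consumption, while the filter intervenes only when the proposed action would violate the tightened safety constraint.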
Source journal: IEEE Transactions on Intelligent Transportation Systems (Engineering & Technology; Engineering: Electrical & Electronic)
CiteScore: 14.80
Self-citation rate: 12.90%
Annual publications: 1872
Review time: 7.5 months
Journal scope: The theoretical, experimental and operational aspects of electrical and electronics engineering and information technologies as applied to Intelligent Transportation Systems (ITS). Intelligent Transportation Systems are defined as those systems utilizing synergistic technologies and systems engineering concepts to develop and improve transportation systems of all kinds. The scope of this interdisciplinary activity includes the promotion, consolidation and coordination of ITS technical activities among IEEE entities, and providing a focus for cooperative activities, both internally and externally.