A multi-objective hierarchical deep reinforcement learning algorithm for connected and automated HEVs energy management

IF 5.4 | CAS Tier 2 (Computer Science) | Q1 AUTOMATION & CONTROL SYSTEMS | Control Engineering Practice | Pub Date: 2024-09-25 | DOI: 10.1016/j.conengprac.2024.106104
Serdar Coskun , Ozan Yazar , Fengqi Zhang , Lin Li , Cong Huang , Hamid Reza Karimi
{"title":"A multi-objective hierarchical deep reinforcement learning algorithm for connected and automated HEVs energy management","authors":"Serdar Coskun ,&nbsp;Ozan Yazar ,&nbsp;Fengqi Zhang ,&nbsp;Lin Li ,&nbsp;Cong Huang ,&nbsp;Hamid Reza Karimi","doi":"10.1016/j.conengprac.2024.106104","DOIUrl":null,"url":null,"abstract":"<div><div>Connected and autonomous vehicles have offered unprecedented opportunities to improve fuel economy and reduce emissions of hybrid electric vehicle (HEV) in vehicular platoons. In this context, a hierarchical control strategy is put forward for connected HEVs. Firstly, we consider a deep deterministic policy gradient (DDPG) algorithm to compute the optimized vehicle speed using a trained optimal policy via vehicle-to-vehicle communication in the upper level. A multi-objective reward function is introduced, integrating vehicle fuel consumption, battery state-of-the-charge, emissions, and vehicle car-following objectives. Secondly, an adaptive equivalent consumption minimization strategy is devised to implement vehicle-level torque allocation in the platoon. Two drive cycles, HWFET and human-in-the-loop simulator driving cycles are utilized for realistic testing of the considered platoon energy management. It is shown that DDPG runs the engine more efficiently than the widely-implemented Q-learning and deep Q-network, thus showing enhanced fuel savings. Further, the contribution of this paper is to speed up the higher-level vehicular control application of deep learning algorithms in the connected and automated HEV platoon energy management applications.</div></div>","PeriodicalId":50615,"journal":{"name":"Control Engineering Practice","volume":"153 ","pages":"Article 106104"},"PeriodicalIF":5.4000,"publicationDate":"2024-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Control Engineering Practice","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0967066124002636","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"AUTOMATION & CONTROL SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract

Connected and autonomous vehicles offer unprecedented opportunities to improve the fuel economy and reduce the emissions of hybrid electric vehicles (HEVs) in vehicular platoons. In this context, a hierarchical control strategy is put forward for connected HEVs. First, a deep deterministic policy gradient (DDPG) algorithm computes the optimized vehicle speed in the upper level, using a trained optimal policy and vehicle-to-vehicle communication. A multi-objective reward function is introduced that integrates vehicle fuel consumption, battery state of charge, emissions, and car-following objectives. Second, an adaptive equivalent consumption minimization strategy is devised to carry out vehicle-level torque allocation in the platoon. Two driving cycles, the HWFET cycle and a human-in-the-loop simulator driving cycle, are used for realistic testing of the considered platoon energy management. It is shown that DDPG runs the engine more efficiently than the widely implemented Q-learning and deep Q-network approaches, yielding greater fuel savings. A further contribution of this paper is to accelerate the application of deep learning algorithms to higher-level vehicular control in connected and automated HEV platoon energy management.
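As a rough illustration of the two layers described in the abstract, the sketch below shows a multi-objective per-step reward of the kind an upper-level DDPG speed planner could use, and a grid-search adaptive ECMS torque split for the lower level. All weights, parameter values, and names (e.g. `fuel_map`, `aecms_torque_split`) are illustrative assumptions and are not taken from the paper.

```python
# Hypothetical sketch of the two control layers described in the abstract.
# All weights, signal names, and vehicle parameters are illustrative
# assumptions, not values from the paper.

import numpy as np


def multi_objective_reward(fuel_rate, soc, emissions_rate, gap, gap_ref,
                           w_fuel=1.0, w_soc=0.5, w_emis=0.3, w_gap=0.8,
                           soc_ref=0.6):
    """Per-step reward for an upper-level DDPG speed planner.

    Penalizes fuel use, deviation of battery state of charge from a
    reference, emissions, and car-following gap error (weights assumed).
    """
    return -(w_fuel * fuel_rate
             + w_soc * (soc - soc_ref) ** 2
             + w_emis * emissions_rate
             + w_gap * (gap - gap_ref) ** 2)


def aecms_torque_split(torque_demand, soc, soc_ref, fuel_map, q_lhv=42.5e6,
                       s0=2.5, k_p=3.0, omega=200.0, n_candidates=51):
    """Lower-level adaptive ECMS: pick the engine/motor torque split that
    minimizes instantaneous equivalent fuel consumption.

    fuel_map(T_eng, omega) -> fuel mass flow [kg/s]. The SOC-feedback
    adaptation of the equivalence factor s is a common A-ECMS form and is
    assumed here.
    """
    s = s0 + k_p * (soc_ref - soc)            # adapt equivalence factor to SOC error
    best_cost, best_T_eng = np.inf, 0.0
    for T_eng in np.linspace(0.0, torque_demand, n_candidates):
        T_mot = torque_demand - T_eng          # motor covers the remaining torque
        p_batt = T_mot * omega                 # crude proxy for battery power [W]
        cost = fuel_map(T_eng, omega) + s * p_batt / q_lhv
        if cost < best_cost:
            best_cost, best_T_eng = cost, T_eng
    return best_T_eng, torque_demand - best_T_eng
```

In this kind of scheme, only the adaptation law for the equivalence factor and the candidate search differ across A-ECMS variants; the paper's specific formulation may differ from the simple proportional SOC feedback assumed above.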

Source journal

Control Engineering Practice (Engineering & Technology - Engineering: Electrical & Electronic)
CiteScore: 9.20
Self-citation rate: 12.20%
Articles published: 183
Review time: 44 days
About the journal: Control Engineering Practice strives to meet the needs of industrial practitioners and industrially related academics and researchers. It publishes papers which illustrate the direct application of control theory and its supporting tools in all possible areas of automation. As a result, the journal only contains papers which can be considered to have made significant contributions to the application of advanced control techniques. It is normally expected that practical results should be included, but where simulation-only studies are available, it is necessary to demonstrate that the simulation model is representative of a genuine application. Strictly theoretical papers will find a more appropriate home in Control Engineering Practice's sister publication, Automatica. It is also expected that papers are innovative with respect to the state of the art and are sufficiently detailed for a reader to be able to duplicate the main results of the paper (supplementary material, including datasets, tables, code and any relevant interactive material can be made available and downloaded from the website). The benefits of the presented methods must be made very clear and the new techniques must be compared and contrasted with results obtained using existing methods. Moreover, a thorough analysis of failures that may happen in the design process and implementation can also be part of the paper. The scope of Control Engineering Practice matches the activities of IFAC. Papers demonstrating the contribution of automation and control in improving the performance, quality, productivity, sustainability, resource and energy efficiency, and the manageability of systems and processes for the benefit of mankind and are relevant to industrial practitioners are most welcome.
Latest articles in this journal

A switched model predictive control with parametric weights-based mode transition strategy for a novel parallel hybrid electric vehicle
An adaptive-node broad learning based incremental model for time-varying nonlinear distributed thermal processes
Evaluating the process operating state taking into consideration operator interventions with application to a hot rolling mill process
Improved direct ripple power predictive control of single-phase rectifier based on ripple separation
Improved sliding mode disturbance observer-based model-free finite-time terminal sliding mode control for IPMSM speed ripple minimization