An apprenticeship-reinforcement learning scheme based on expert demonstrations for energy management strategy of hybrid electric vehicles
Dong Hu, Hui Xie, Kang Song, Yuanyuan Zhang, Long Yan
Applied Energy, Volume 342, Article 121227, 15 July 2023
DOI: 10.1016/j.apenergy.2023.121227
URL: https://www.sciencedirect.com/science/article/pii/S0306261923005913
Citations: 2
Abstract
Deep reinforcement learning (DRL) is a potential solution for developing efficient energy management strategies (EMS) for hybrid electric vehicles (HEV) that can adapt to the changing topology of electrified powertrains and the uncertainty of various driving scenarios. However, traditional DRL has notable disadvantages, such as low efficiency and poor stability. This study proposes an apprenticeship-reinforcement learning (A-RL) framework based on expert demonstration (ED) model embedding to improve DRL. First, demonstration data computed by dynamic programming (DP) were collected, and domain adaptive meta-learning (DAML) was used to train an ED model capable of adapting to varying working conditions. Apprenticeship learning (AL) was then combined with DRL, with the ED model guiding the DRL agent's action output. The method was validated on three HEV models, and the results show that the training convergence rate increases significantly under the framework: applied to the three HEVs, the apprenticeship-deep deterministic policy gradient (A-DDPG) based method accelerated convergence by an average of 34.9 %, while the apprenticeship-twin delayed deep deterministic policy gradient (A-TD3) method achieved a 23 % acceleration on the power-split HEV. Because the A-DDPG EMS is more forward-looking and can mimic the ED to some extent, the engine operates more frequently in its high-efficiency range. As a result, A-DDPG improves the fuel economy of the series hybrid electric bus (HEB) by 0.2–2.7 %, with improvements averaging about 9.6 % for the series–parallel HEV, while maintaining the final SOC. This study aims to improve the sampling efficiency and optimality of DRL-based EMS and to provide a basis for the design and development of vehicle energy saving and emission reduction.
About the Journal:
Applied Energy serves as a platform for sharing innovations, research, development, and demonstrations in energy conversion, conservation, and sustainable energy systems. The journal covers topics such as optimal energy resource use, environmental pollutant mitigation, and energy process analysis. It welcomes original papers, review articles, technical notes, and letters to the editor. Authors are encouraged to submit manuscripts that bridge the gap between research, development, and implementation. The journal addresses a wide spectrum of topics, including fossil and renewable energy technologies, energy economics, and environmental impacts. Applied Energy also explores modeling and forecasting, conservation strategies, and the social and economic implications of energy policies, including climate change mitigation. It is complemented by the open-access journal Advances in Applied Energy.