{"title":"Multi-Timescale Reward-Based DRL Energy Management for Regenerative Braking Energy Storage System","authors":"Junyu Chen;Yue Zhao;Minghao Wang;Kai Yang;Yinbo Ge;Ke Wang;Hongjian Lin;Pengyu Pan;Haitao Hu;Zhengyou He;Zhao Xu","doi":"10.1109/TTE.2025.3528255","DOIUrl":null,"url":null,"abstract":"The traditional model-based energy management strategy (EMS) for regenerative braking energy storage systems (RBESSs) is obsoleting in the face of increasingly complex and uncertain operation conditions in railway power systems (RPSs). In this article, a model-free deep reinforcement learning (DRL) method is proposed. First, the multiobjective energy management problem for RBESS is formulated to concurrently achieve the regenerative braking energy (RBE) utilization and power demand shaving of RPS. Then, this problem is modeled as a Markov decision process (MDP) to be solved by the DRL-based method. Specifically, the RBESS controller is modeled as an agent to interact with the environment modeled as the RPS integrated with RBESS. To coordinate the agent to learn the optimal strategies regarding multiple energy management objectives in different timescales, a multistage reward function (MSRF) involving the step reward and final reward is designed. Based on the above elements, the double deep <italic>Q</i>-learning algorithm is applied to train the agent for optimizing the EMS. Finally, the proposed DRL-based EMS is tested on the OPAL-RT experimental platform by using the field load data. Case studies have demonstrated that the proposed method outperforms the traditional rule-based and optimization-based methods by over 5% in the energy management objective.","PeriodicalId":56269,"journal":{"name":"IEEE Transactions on Transportation Electrification","volume":"11 3","pages":"7488-7500"},"PeriodicalIF":8.3000,"publicationDate":"2025-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Transportation Electrification","FirstCategoryId":"5","ListUrlMain":"https://ieeexplore.ieee.org/document/10836947/","RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Abstract
The traditional model-based energy management strategy (EMS) for regenerative braking energy storage systems (RBESSs) is becoming obsolete in the face of increasingly complex and uncertain operating conditions in railway power systems (RPSs). In this article, a model-free deep reinforcement learning (DRL) method is proposed. First, the multiobjective energy management problem for the RBESS is formulated to concurrently achieve regenerative braking energy (RBE) utilization and power demand shaving for the RPS. Then, this problem is modeled as a Markov decision process (MDP) to be solved by the DRL-based method. Specifically, the RBESS controller is modeled as an agent that interacts with the environment, which is modeled as the RPS integrated with the RBESS. To guide the agent in learning optimal strategies for multiple energy management objectives on different timescales, a multistage reward function (MSRF) comprising a step reward and a final reward is designed. Based on these elements, the double deep Q-learning algorithm is applied to train the agent to optimize the EMS. Finally, the proposed DRL-based EMS is tested on the OPAL-RT experimental platform using field load data. Case studies demonstrate that the proposed method outperforms traditional rule-based and optimization-based methods by over 5% in the energy management objective.
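To make the training mechanics concrete, the sketch below illustrates a double deep Q-learning (DDQN) update driven by a multistage reward that combines a per-step reward with a final reward at episode end, as the abstract describes. This is only a minimal illustration, not the authors' implementation: the network sizes, state/action dimensions, reward values, and all identifiers here are hypothetical, and the paper's actual state, action, and reward definitions are not given in this abstract.

```python
# Minimal DDQN sketch with a multistage (step + final) reward.
# All dimensions, names, and reward terms are hypothetical placeholders.
import torch
import torch.nn as nn

STATE_DIM, N_ACTIONS, GAMMA = 4, 5, 0.99  # assumed, not from the paper

# Online and target Q-networks with identical (toy-sized) architectures.
q_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))
target_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))
target_net.load_state_dict(q_net.state_dict())
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

def multistage_reward(step_reward: float, final_reward: float, done: bool) -> float:
    # Step reward arrives every control interval (short-timescale objective);
    # the final reward is granted only at episode end (long-timescale objective).
    return step_reward + (final_reward if done else 0.0)

def ddqn_update(s, a, r, s_next, done):
    # Double DQN: the online net selects the next action,
    # while the target net evaluates it, reducing overestimation bias.
    q_sa = q_net(s).gather(1, a)
    with torch.no_grad():
        a_next = q_net(s_next).argmax(dim=1, keepdim=True)
        q_next = target_net(s_next).gather(1, a_next)
        y = r + GAMMA * (1.0 - done) * q_next
    loss = nn.functional.mse_loss(q_sa, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Hypothetical single-transition batch ending an episode:
s = torch.randn(1, STATE_DIM)
a = torch.tensor([[2]])  # chosen action index
r = torch.tensor([[multistage_reward(0.3, 1.5, done=True)]])
s_next = torch.randn(1, STATE_DIM)
done = torch.tensor([[1.0]])
ddqn_update(s, a, r, s_next, done)
```

In a setting like the one the abstract outlines, the step reward would score short-timescale behavior (e.g., RBE recovered at each interval) while the final reward would score the whole-horizon objective (e.g., overall demand shaving), which is what lets a single agent coordinate objectives on different timescales.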
Journal Introduction:
IEEE Transactions on Transportation Electrification is focused on components, sub-systems, systems, standards, and grid interface technologies related to power and energy conversion, propulsion, and actuation for all types of electrified vehicles including on-road, off-road, off-highway, and rail vehicles, airplanes, and ships.