A Decentralized Multi-agent Energy Management Strategy Based on a Look-Ahead Reinforcement Learning Approach

Arash Khalatbarisoltani, M. Kandidayeni, L. Boulon, Xiaosong Hu

SAE International Journal of Electrified Vehicles, Vol. 14, No. 1. Published 2021-11-05. DOI: 10.4271/14-11-02-0012
Citations: 8
Abstract
An energy management strategy (EMS) plays an essential role in improving the efficiency and lifetime of the powertrain components in a hybrid fuel cell vehicle (HFCV). The EMS of an intelligent HFCV uses advanced data-driven techniques to efficiently distribute the power flow among power sources with heterogeneous energetic characteristics. Decentralized EMSs provide higher modularity (plug and play) and reliability than centralized data-driven strategies. Modularity is the property that allows new components to be added to a powertrain system without reconfiguration. Hence, this paper puts forward a decentralized reinforcement learning (Dec-RL) framework for designing an EMS for a heavy-duty HFCV. The studied powertrain is composed of two parallel fuel cell systems (FCSs) and a battery pack. The contribution of the proposed multi-agent approach lies in the development of a fully decentralized learning strategy composed of several connected local modules. The performance of the proposed approach is investigated through several simulations and experimental tests. The results indicate the advantage of the established Dec-RL control scheme in convergence speed and optimization criteria.
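To make the decentralized multi-agent idea concrete, the sketch below shows a generic independent-learner setup: each fuel cell system (FCS) is a separate tabular Q-learning agent choosing its own power contribution, and the battery absorbs the residual demand, which the shared reward penalizes. This is a minimal, hypothetical illustration of decentralized RL for power splitting, not the paper's actual Dec-RL algorithm; all names, power levels, and the toy demand profile are invented for the example.

```python
# Hypothetical sketch: two independent Q-learning agents (one per FCS)
# split a power demand; the battery covers the residual. NOT the
# paper's Dec-RL method -- a generic decentralized-learning toy.
import random

random.seed(0)
POWER_LEVELS = [0, 5, 10, 15]  # kW actions available to each FCS agent

class FCSAgent:
    def __init__(self, eps=0.1, alpha=0.5, gamma=0.9):
        self.q = {}  # (state, action) -> estimated value
        self.eps, self.alpha, self.gamma = eps, alpha, gamma

    def act(self, state):
        # epsilon-greedy action selection over the local Q-table
        if random.random() < self.eps:
            return random.choice(POWER_LEVELS)
        return max(POWER_LEVELS, key=lambda a: self.q.get((state, a), 0.0))

    def learn(self, s, a, r, s2):
        # standard one-step Q-learning update using only local information
        best = max(self.q.get((s2, a2), 0.0) for a2 in POWER_LEVELS)
        old = self.q.get((s, a), 0.0)
        self.q[(s, a)] = old + self.alpha * (r + self.gamma * best - old)

def reward(demand, p1, p2):
    battery = demand - (p1 + p2)  # residual power drawn from the battery
    return -abs(battery)          # toy cost: penalize battery stress

agents = [FCSAgent(), FCSAgent()]
demand_profile = [12, 20, 8, 25]  # toy driving-cycle power demands (kW)

for episode in range(500):
    for t, demand in enumerate(demand_profile):
        state = demand  # toy state: the current power demand
        actions = [ag.act(state) for ag in agents]
        r = reward(demand, *actions)
        next_state = demand_profile[(t + 1) % len(demand_profile)]
        for ag, a in zip(agents, actions):
            ag.learn(state, a, r, next_state)
```

Because each agent keeps only its own Q-table and coordination happens implicitly through the shared reward, adding a third power source means adding another agent rather than retraining a central controller, which is the plug-and-play modularity the abstract refers to.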