Mobile battery energy storage system control with knowledge-assisted deep reinforcement learning

Huan Zhao, Zifan Liu, Xuan Mai, Junhua Zhao, Jing Qiu, Guolong Liu, Zhao Yang Dong, Amer M. Y. M. Ghias

Energy Conversion and Economics, vol. 3, no. 6, pp. 381–391, published 28 December 2022. DOI: 10.1049/enc2.12075. Open-access PDF: https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/enc2.12075
Most mobile battery energy storage systems (MBESSs) are designed to enhance power system resilience and to provide ancillary services to the system operator. As renewable energy penetration and electricity price volatility increase in the power system, demand-side commercial entities can profit more by exploiting the mobility and flexibility of MBESSs than by using stationary energy storage systems. This profit depends strongly on the spatiotemporal decision model and is affected by environmental uncertainties such as electricity prices and traffic conditions. However, solving the real-time control problem while accounting for long-term profit and these uncertainties is time-consuming. To address this problem, this paper proposes a deep reinforcement learning framework that enables an MBESS to maximize profit through market arbitrage. Within this framework, a knowledge-assisted double deep Q-network (KA-DDQN) algorithm is proposed to learn the optimal policy and improve learning efficiency. In addition, two knowledge-assisted action generation criteria are proposed for integer actions, based on scheduling and short-term programming results. Simulation results show that the proposed framework and method achieve the optimal result, and that KA-DDQN accelerates the learning process by approximately 30% compared with the original method.
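The KA-DDQN described in the abstract combines two ingredients: a standard double deep Q-network update and knowledge-based action generation that injects actions derived from scheduling or short-term programming results during exploration. The sketch below is a minimal illustration of those two ingredients in Python/PyTorch; the network sizes, the expert_action() price heuristic, the expert_prob parameter, and all hyper-parameters are assumptions made for illustration and are not taken from the paper.

```python
# Minimal sketch: double-DQN update plus knowledge-assisted exploration.
# All dimensions, the expert heuristic, and hyper-parameters are assumed.
import random
import torch
import torch.nn as nn

STATE_DIM, N_ACTIONS, GAMMA = 8, 5, 0.99   # assumed problem sizes

q_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))
target_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))
target_net.load_state_dict(q_net.state_dict())
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

def expert_action(state):
    """Hypothetical knowledge-based action, e.g. from a short-term schedule:
    charge (action 0) when the price signal is low, discharge otherwise."""
    return 0 if state[0] < 0.5 else N_ACTIONS - 1

def select_action(state, epsilon=0.1, expert_prob=0.3):
    """Epsilon-greedy selection; part of the exploratory actions are drawn
    from the knowledge-based generator instead of uniformly at random."""
    if random.random() < epsilon:
        if random.random() < expert_prob:
            return expert_action(state)
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return int(q_net(torch.tensor(state, dtype=torch.float32)).argmax())

def ddqn_update(batch):
    """One double-DQN step: the online net selects the next action,
    the target net evaluates it, decoupling selection from evaluation."""
    s, a, r, s2, done = (torch.tensor(x, dtype=torch.float32) for x in batch)
    a = a.long()
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        next_a = q_net(s2).argmax(dim=1, keepdim=True)
        target = r + GAMMA * (1 - done) * target_net(s2).gather(1, next_a).squeeze(1)
    loss = nn.functional.mse_loss(q_sa, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch the knowledge assistance only biases exploration toward schedule-derived actions; how the paper's two action generation criteria are constructed and weighted is described in the full text, not reproduced here.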