Mobile battery energy storage system control with knowledge-assisted deep reinforcement learning

Huan Zhao, Zifan Liu, Xuan Mai, Junhua Zhao, Jing Qiu, Guolong Liu, Zhao Yang Dong, Amer M. Y. M. Ghias
{"title":"基于知识辅助深度强化学习的移动电池储能系统控制","authors":"Huan Zhao,&nbsp;Zifan Liu,&nbsp;Xuan Mai,&nbsp;Junhua Zhao,&nbsp;Jing Qiu,&nbsp;Guolong Liu,&nbsp;Zhao Yang Dong,&nbsp;Amer M. Y. M. Ghias","doi":"10.1049/enc2.12075","DOIUrl":null,"url":null,"abstract":"<p>Most mobile battery energy storage systems (MBESSs) are designed to enhance power system resilience and provide ancillary service for the system operator using energy storage. As the penetration of renewable energy and fluctuation of the electricity price increase in the power system, the demand-side commercial entities can be more profitable utilizing the mobility and flexibility of MBESSs compared to the stational energy storage system. The profit is closely related to the spatiotemporal decision model and is influenced by environmental uncertainties, such as electricity price and traffic conditions. However, solving the real-time control problem considering long-term profit and uncertainties is time-consuming. To address this problem, this paper proposes a deep reinforcement learning framework for MBESSs to maximize profit through market arbitrage. A knowledge-assisted double deep Q network (KA-DDQN) algorithm is proposed based on such framework to learn the optimal policy and increase the learning efficiency. Moreover, two criteria action generation methods of knowledge-assisted learning are proposed for integer actions utilizing scheduling and short-term programming results. Simulation results show that the proposed framework and method can achieve the optimal result, and KA-DDQN can accelerate the learning process compared to the original method by approximately 30%.</p>","PeriodicalId":100467,"journal":{"name":"Energy Conversion and Economics","volume":"3 6","pages":"381-391"},"PeriodicalIF":0.0000,"publicationDate":"2022-12-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/enc2.12075","citationCount":"1","resultStr":"{\"title\":\"Mobile battery energy storage system control with knowledge-assisted deep reinforcement learning\",\"authors\":\"Huan Zhao,&nbsp;Zifan Liu,&nbsp;Xuan Mai,&nbsp;Junhua Zhao,&nbsp;Jing Qiu,&nbsp;Guolong Liu,&nbsp;Zhao Yang Dong,&nbsp;Amer M. Y. M. Ghias\",\"doi\":\"10.1049/enc2.12075\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Most mobile battery energy storage systems (MBESSs) are designed to enhance power system resilience and provide ancillary service for the system operator using energy storage. As the penetration of renewable energy and fluctuation of the electricity price increase in the power system, the demand-side commercial entities can be more profitable utilizing the mobility and flexibility of MBESSs compared to the stational energy storage system. The profit is closely related to the spatiotemporal decision model and is influenced by environmental uncertainties, such as electricity price and traffic conditions. However, solving the real-time control problem considering long-term profit and uncertainties is time-consuming. To address this problem, this paper proposes a deep reinforcement learning framework for MBESSs to maximize profit through market arbitrage. A knowledge-assisted double deep Q network (KA-DDQN) algorithm is proposed based on such framework to learn the optimal policy and increase the learning efficiency. 
Moreover, two criteria action generation methods of knowledge-assisted learning are proposed for integer actions utilizing scheduling and short-term programming results. Simulation results show that the proposed framework and method can achieve the optimal result, and KA-DDQN can accelerate the learning process compared to the original method by approximately 30%.</p>\",\"PeriodicalId\":100467,\"journal\":{\"name\":\"Energy Conversion and Economics\",\"volume\":\"3 6\",\"pages\":\"381-391\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-12-28\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/enc2.12075\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Energy Conversion and Economics\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1049/enc2.12075\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Energy Conversion and Economics","FirstCategoryId":"1085","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1049/enc2.12075","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1

Abstract

Most mobile battery energy storage systems (MBESSs) are designed to enhance power system resilience and to provide ancillary services to the system operator. As renewable energy penetration and electricity price fluctuations in the power system increase, demand-side commercial entities can profit more from the mobility and flexibility of MBESSs than from stationary energy storage systems. This profit depends closely on the spatiotemporal decision model and is affected by environmental uncertainties such as the electricity price and traffic conditions. However, solving the real-time control problem while accounting for long-term profit and these uncertainties is time-consuming. To address this problem, this paper proposes a deep reinforcement learning framework for MBESSs that maximizes profit through market arbitrage. Based on this framework, a knowledge-assisted double deep Q-network (KA-DDQN) algorithm is proposed to learn the optimal policy and improve learning efficiency. Moreover, two criteria-based action generation methods for knowledge-assisted learning are proposed for integer actions, utilizing scheduling and short-term programming results. Simulation results show that the proposed framework and method achieve the optimal result, and that KA-DDQN accelerates the learning process by approximately 30% compared with the original method.
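
The abstract names the building blocks of the method (a double deep Q-network learner plus knowledge-assisted action generation from scheduling and short-term programming results) but gives no implementation detail. The sketch below illustrates how such a scheme can fit together; it is not the authors' code. The state layout, the hypothetical schedule lookup in knowledge_action, the network sizes, and the probability p_know of following the knowledge-suggested action are all assumptions made for illustration; only the standard double-DQN update rule and the idea of guiding exploration with knowledge-derived integer actions follow the abstract.

# Minimal sketch (illustrative, not the authors' code): a double-DQN update
# with knowledge-guided exploration over integer MBESS actions.
import random

import torch
import torch.nn as nn

class QNetwork(nn.Module):
    # Maps a state vector (e.g. price, state of charge, location, time step)
    # to one Q-value per integer action.
    def __init__(self, state_dim, n_actions, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, state):
        return self.net(state)

def knowledge_action(state, schedule):
    # Hypothetical knowledge source: the action suggested by a day-ahead
    # schedule or short-horizon program, indexed by the time step, which
    # we assume is the last state feature.
    return schedule.get(int(state[-1]))

def select_action(q_net, state, schedule, n_actions, eps=0.1, p_know=0.5):
    # Epsilon-greedy exploration, except that with probability p_know the
    # exploratory action comes from prior knowledge instead of being uniform.
    if random.random() < eps:
        suggested = knowledge_action(state, schedule)
        if suggested is not None and random.random() < p_know:
            return suggested
        return random.randrange(n_actions)
    with torch.no_grad():
        q = q_net(torch.as_tensor(state, dtype=torch.float32))
        return int(q.argmax())

def ddqn_update(q_net, target_net, optimizer, batch, gamma=0.99):
    # Standard double-DQN target: the online network selects the next
    # action, the target network evaluates it.
    states, actions, rewards, next_states, dones = batch
    q = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        next_actions = q_net(next_states).argmax(dim=1, keepdim=True)
        next_q = target_net(next_states).gather(1, next_actions).squeeze(1)
        target = rewards + gamma * (1.0 - dones) * next_q
    loss = nn.functional.smooth_l1_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

In this sketch, prior knowledge only biases epsilon-greedy exploration toward schedule-consistent actions; the paper's two criteria for generating those actions are not described in the abstract, so a single lookup stands in for both.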

