Reinforcement learning-based optimization for power scheduling in a renewable energy connected grid

Renewable Energy · IF 9.0 · CAS Tier 1 (Engineering & Technology) · JCR Q1 (ENERGY & FUELS) · Pub Date: 2024-06-28 · DOI: 10.1016/j.renene.2024.120886
Awol Seid Ebrie , Young Jin Kim
Citations: 0

Reinforcement learning-based optimization for power scheduling in a renewable energy connected grid

Power scheduling is an NP-hard optimization problem that demands a delicate equilibrium between economic costs and environmental emissions. In response to the growing concern over climate change, global environmental policies prioritize decarbonizing the electricity sector by integrating renewable energies (REs) into power grids. While this integration brings economic and environmental benefits, the intermittency of REs amplifies the uncertainty and complexity of power scheduling. Existing optimization approaches often grapple with a limited number of units, overlook critical parameters, and disregard the intermittency of REs. To address these limitations, this article introduces a robust and scalable optimization algorithm for renewable-integrated power scheduling based on reinforcement learning (RL). In the proposed methodology, the power scheduling problem is decomposed into Markov decision processes (MDPs) within a multi-agent simulation environment. The simulated MDPs are used to train a deep reinforcement learning (DRL) model that solves the optimization. The effectiveness of the proposed method is validated across various test systems, encompassing single- to tri-objective problems with 10–100 generating units. The findings consistently demonstrate the superior performance of the proposed DRL algorithm compared to existing methods, such as the multi-agent immune system-based evolutionary priority list (MAI-EPL), binary real-coded genetic algorithm (BRCGA), teaching learning-based optimization (TLBO), quasi-oppositional teaching learning-based algorithm (QOTLBO), hybrid genetic-imperialist competitive algorithm (HGICA), three-stage priority list (TSPL), real-coded grey wolf optimization (RCGWO), multi-objective evolutionary algorithm based on decomposition (MOEAD), and non-dominated sorting algorithms (NSGA-II and NSGA-III). The experimental results also highlight the value of integrating RESs into larger power systems. In a 10-unit system with 2.81 % RE penetration, reductions of 3.42 %, 4.03 %, and 3.10 % were observed in costs, CO2 emissions, and SO2 emissions, respectively. Similarly, in a 100-unit system with an RE penetration rate of only 0.28 %, reductions of 3.75 % in cost, 4.42 % in CO2, and 3.34 % in SO2 were observed. These findings emphasize the effectiveness of RES integration, even at lower penetration rates, in larger-scale power systems.
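The core idea the abstract describes — casting a scheduling decision as an MDP and learning a policy that trades off cost against emissions — can be illustrated with a deliberately tiny sketch. Everything below (the unit data, the objective weights, and the use of single-step tabular Q-learning in place of the paper's deep multi-agent DRL on 10–100 unit systems) is a hypothetical simplification, not the authors' algorithm:

```python
import random

# Toy sketch ONLY: a one-step unit-commitment problem treated as an MDP and
# solved with tabular Q-learning. Unit costs, emissions, demand, and weights
# are made-up illustrative numbers, not data from the paper.

N_UNITS = 3
DEMAND = 2                   # units of output that must be supplied
COST = [1.0, 1.5, 2.5]       # per-unit running cost (hypothetical)
EMISSION = [2.0, 1.0, 0.5]   # per-unit CO2 emission (hypothetical)
W_COST, W_EMIT = 0.6, 0.4    # weights blending the two objectives

# An action is an on/off commitment for every unit (8 actions for 3 units).
ACTIONS = [tuple((a >> i) & 1 for i in range(N_UNITS)) for a in range(2 ** N_UNITS)]

def reward(commitment):
    """Negative weighted cost+emission objective; heavy penalty if demand is unmet."""
    if sum(commitment) < DEMAND:
        return -100.0
    cost = sum(c for on, c in zip(commitment, COST) if on)
    emit = sum(e for on, e in zip(commitment, EMISSION) if on)
    return -(W_COST * cost + W_EMIT * emit)

def train(episodes=2000, alpha=0.1, eps=0.2, seed=0):
    """Epsilon-greedy tabular Q-learning; returns the greedy commitment."""
    rng = random.Random(seed)
    q = [0.0] * len(ACTIONS)  # single state, so Q is just one value per action
    for _ in range(episodes):
        a = (rng.randrange(len(ACTIONS)) if rng.random() < eps
             else max(range(len(ACTIONS)), key=q.__getitem__))
        q[a] += alpha * (reward(ACTIONS[a]) - q[a])
    return ACTIONS[max(range(len(ACTIONS)), key=q.__getitem__)]

best = train()  # the learned commitment meets demand at minimal weighted objective
```

With these numbers the learner settles on committing the two cheaper units, since running the low-emission unit 3 costs more than its emission savings are worth under the chosen weights. Scaling the same idea to realistic systems is where the paper's multi-agent simulation and deep RL model come in.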

Source journal: Renewable Energy (Engineering & Technology: Energy & Fuels)
CiteScore: 18.40
Self-citation rate: 9.20%
Articles per year: 1955
Review time: 6.6 months
Journal description: Renewable Energy journal is dedicated to advancing knowledge and disseminating insights on various topics and technologies within renewable energy systems and components. Our mission is to support researchers, engineers, economists, manufacturers, NGOs, associations, and societies in staying updated on new developments in their respective fields and applying alternative energy solutions to current practices. As an international, multidisciplinary journal in renewable energy engineering and research, we strive to be a premier peer-reviewed platform and a trusted source of original research and reviews in the field of renewable energy.