Optimal energy management in smart energy systems: A deep reinforcement learning approach and a digital twin case-study

IF 5.4 | Q2 ENERGY & FUELS | Smart Energy | Pub Date: 2024-10-16 | DOI: 10.1016/j.segy.2024.100163
Dhekra Bousnina, Gilles Guerassimoff
{"title":"智能能源系统中的最佳能源管理:深度强化学习方法和数字孪生案例研究","authors":"Dhekra Bousnina ,&nbsp;Gilles Guerassimoff","doi":"10.1016/j.segy.2024.100163","DOIUrl":null,"url":null,"abstract":"<div><div>This research work introduces a novel approach to energy management in Smart Energy Systems (SES) using Deep Reinforcement Learning (DRL) to optimize the management of flexible energy systems in SES, including heating, cooling and electricity storage systems along with District Heating and Cooling Systems (DHCS). The proposed approach is applied on Meridia Smart Energy (MSE), a french demonstration project for SES. The proposed DRL framework, based on actor–critic architecture, is first applied on a Modelica digital twin that we developed for the MSE SES, and is benchmarked against a rule-based approach. The DRL agent learnt an effective strategy for managing thermal and electrical storage systems, resulting in optimized energy costs within the SES. Notably, the acquired strategy achieved annual cost reduction of at least 5% compared to the rule-based benchmark strategy. Moreover, the near-real time decision-making capabilities of the trained DRL agent provides a significant advantage over traditional optimization methods that require time-consuming re-computation at each decision point. By training the DRL agent on a digital twin of the real-world MSE project, rather than hypothetical simulation models, this study lays the foundation for a pioneering application of DRL in the real-world MSE SES, showcasing its potential for practical implementation.</div></div>","PeriodicalId":34738,"journal":{"name":"Smart Energy","volume":null,"pages":null},"PeriodicalIF":5.4000,"publicationDate":"2024-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Optimal energy management in smart energy systems: A deep reinforcement learning approach and a digital twin case-study\",\"authors\":\"Dhekra Bousnina ,&nbsp;Gilles Guerassimoff\",\"doi\":\"10.1016/j.segy.2024.100163\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>This research work introduces a novel approach to energy management in Smart Energy Systems (SES) using Deep Reinforcement Learning (DRL) to optimize the management of flexible energy systems in SES, including heating, cooling and electricity storage systems along with District Heating and Cooling Systems (DHCS). The proposed approach is applied on Meridia Smart Energy (MSE), a french demonstration project for SES. The proposed DRL framework, based on actor–critic architecture, is first applied on a Modelica digital twin that we developed for the MSE SES, and is benchmarked against a rule-based approach. The DRL agent learnt an effective strategy for managing thermal and electrical storage systems, resulting in optimized energy costs within the SES. Notably, the acquired strategy achieved annual cost reduction of at least 5% compared to the rule-based benchmark strategy. Moreover, the near-real time decision-making capabilities of the trained DRL agent provides a significant advantage over traditional optimization methods that require time-consuming re-computation at each decision point. 
By training the DRL agent on a digital twin of the real-world MSE project, rather than hypothetical simulation models, this study lays the foundation for a pioneering application of DRL in the real-world MSE SES, showcasing its potential for practical implementation.</div></div>\",\"PeriodicalId\":34738,\"journal\":{\"name\":\"Smart Energy\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":5.4000,\"publicationDate\":\"2024-10-16\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Smart Energy\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2666955224000339\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"ENERGY & FUELS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Smart Energy","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2666955224000339","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENERGY & FUELS","Score":null,"Total":0}
Citations: 0

Abstract

This research introduces a novel approach to energy management in Smart Energy Systems (SES), using Deep Reinforcement Learning (DRL) to optimize the management of flexible energy systems in SES, including heating, cooling and electricity storage systems along with District Heating and Cooling Systems (DHCS). The proposed approach is applied to Meridia Smart Energy (MSE), a French demonstration project for SES. The proposed DRL framework, based on an actor–critic architecture, is first applied to a Modelica digital twin that we developed for the MSE SES, and is benchmarked against a rule-based approach. The DRL agent learnt an effective strategy for managing thermal and electrical storage systems, resulting in optimized energy costs within the SES. Notably, the acquired strategy achieved an annual cost reduction of at least 5% compared to the rule-based benchmark strategy. Moreover, the near-real-time decision-making capabilities of the trained DRL agent provide a significant advantage over traditional optimization methods that require time-consuming re-computation at each decision point. By training the DRL agent on a digital twin of the real-world MSE project, rather than on hypothetical simulation models, this study lays the foundation for a pioneering application of DRL in the real-world MSE SES, showcasing its potential for practical implementation.
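The paper itself publishes no code; the sketch below is only a rough illustration of the kind of setup the abstract describes: a gymnasium-style environment standing in for the Modelica digital twin of the MSE storage systems, a simple price-threshold rule as the rule-based benchmark, and an off-the-shelf actor–critic algorithm (here SAC from stable-baselines3) trained against it. All names, dynamics and numbers in the snippet (DigitalTwinEnv, the sinusoidal price proxy, the 0.12 threshold) are assumptions made for illustration, not the authors' model.

```python
# Illustrative sketch (not the authors' code): an actor-critic DRL agent dispatching
# thermal and electrical storage against a digital-twin-like environment, with a
# price-threshold rule-based policy as the benchmark.
import numpy as np
import gymnasium as gym
from gymnasium import spaces


class DigitalTwinEnv(gym.Env):
    """Toy stand-in for a digital twin of an SES with two storages (thermal, battery)."""

    def __init__(self, horizon=24):
        super().__init__()
        self.horizon = horizon
        # Continuous set-points in [-1, 1]: negative = discharge, positive = charge.
        self.action_space = spaces.Box(-1.0, 1.0, shape=(2,), dtype=np.float32)
        # Observation: [normalised time, price proxy, thermal SoC, battery SoC].
        self.observation_space = spaces.Box(0.0, 2.0, shape=(4,), dtype=np.float32)

    def _obs(self):
        price = 0.10 + 0.05 * np.sin(2 * np.pi * self.t / self.horizon)  # price proxy
        return np.array([self.t / self.horizon, price, *self.soc], dtype=np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.t = 0
        self.soc = np.array([0.5, 0.5], dtype=np.float32)  # states of charge
        return self._obs(), {}

    def step(self, action):
        action = np.clip(np.asarray(action, dtype=np.float32), -1.0, 1.0)
        price = float(self._obs()[1])
        self.soc = np.clip(self.soc + 0.1 * action, 0.0, 1.0)
        grid_energy = max(1.0 + 0.5 * float(action.sum()), 0.0)  # demand + net charging
        reward = -price * grid_energy  # the agent minimises energy cost
        self.t += 1
        return self._obs(), reward, self.t >= self.horizon, False, {}


def rule_based_policy(obs):
    """Price-threshold benchmark: discharge both storages when the price proxy is
    high, charge them otherwise."""
    charge = np.array([1.0, 1.0], dtype=np.float32)
    return -charge if obs[1] > 0.12 else charge


if __name__ == "__main__":
    # Any off-the-shelf actor-critic algorithm can be trained on this interface,
    # e.g. SAC from stable-baselines3 (assuming the package is installed).
    from stable_baselines3 import SAC

    env = DigitalTwinEnv()
    model = SAC("MlpPolicy", env, verbose=0)
    model.learn(total_timesteps=10_000)
```

Comparing the cumulative cost (negative reward) of the trained agent with that of rule_based_policy over the same episodes mirrors, in miniature, the rule-based benchmarking reported in the abstract.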


Source journal
Smart Energy (Engineering - Mechanical Engineering)
CiteScore: 9.20
Self-citation rate: 0.00%
Articles published: 29
Review time: 73 days
Latest articles in this journal
Predictive building energy management with user feedback in the loop
Optimal energy management in smart energy systems: A deep reinforcement learning approach and a digital twin case-study
Economic viability of decentralised battery storage systems for single-family buildings up to cross-building utilisation
The impact of offshore energy hub and hydrogen integration on the Faroe Island’s energy system
The cost of CO2 emissions abatement in a micro energy community in a Belgian context