{"title":"智能能源系统中的最佳能源管理:深度强化学习方法和数字孪生案例研究","authors":"Dhekra Bousnina , Gilles Guerassimoff","doi":"10.1016/j.segy.2024.100163","DOIUrl":null,"url":null,"abstract":"<div><div>This research work introduces a novel approach to energy management in Smart Energy Systems (SES) using Deep Reinforcement Learning (DRL) to optimize the management of flexible energy systems in SES, including heating, cooling and electricity storage systems along with District Heating and Cooling Systems (DHCS). The proposed approach is applied on Meridia Smart Energy (MSE), a french demonstration project for SES. The proposed DRL framework, based on actor–critic architecture, is first applied on a Modelica digital twin that we developed for the MSE SES, and is benchmarked against a rule-based approach. The DRL agent learnt an effective strategy for managing thermal and electrical storage systems, resulting in optimized energy costs within the SES. Notably, the acquired strategy achieved annual cost reduction of at least 5% compared to the rule-based benchmark strategy. Moreover, the near-real time decision-making capabilities of the trained DRL agent provides a significant advantage over traditional optimization methods that require time-consuming re-computation at each decision point. By training the DRL agent on a digital twin of the real-world MSE project, rather than hypothetical simulation models, this study lays the foundation for a pioneering application of DRL in the real-world MSE SES, showcasing its potential for practical implementation.</div></div>","PeriodicalId":34738,"journal":{"name":"Smart Energy","volume":"16 ","pages":"Article 100163"},"PeriodicalIF":5.4000,"publicationDate":"2024-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Optimal energy management in smart energy systems: A deep reinforcement learning approach and a digital twin case-study\",\"authors\":\"Dhekra Bousnina , Gilles Guerassimoff\",\"doi\":\"10.1016/j.segy.2024.100163\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>This research work introduces a novel approach to energy management in Smart Energy Systems (SES) using Deep Reinforcement Learning (DRL) to optimize the management of flexible energy systems in SES, including heating, cooling and electricity storage systems along with District Heating and Cooling Systems (DHCS). The proposed approach is applied on Meridia Smart Energy (MSE), a french demonstration project for SES. The proposed DRL framework, based on actor–critic architecture, is first applied on a Modelica digital twin that we developed for the MSE SES, and is benchmarked against a rule-based approach. The DRL agent learnt an effective strategy for managing thermal and electrical storage systems, resulting in optimized energy costs within the SES. Notably, the acquired strategy achieved annual cost reduction of at least 5% compared to the rule-based benchmark strategy. Moreover, the near-real time decision-making capabilities of the trained DRL agent provides a significant advantage over traditional optimization methods that require time-consuming re-computation at each decision point. 
By training the DRL agent on a digital twin of the real-world MSE project, rather than hypothetical simulation models, this study lays the foundation for a pioneering application of DRL in the real-world MSE SES, showcasing its potential for practical implementation.</div></div>\",\"PeriodicalId\":34738,\"journal\":{\"name\":\"Smart Energy\",\"volume\":\"16 \",\"pages\":\"Article 100163\"},\"PeriodicalIF\":5.4000,\"publicationDate\":\"2024-10-16\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Smart Energy\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2666955224000339\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"ENERGY & FUELS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Smart Energy","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2666955224000339","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENERGY & FUELS","Score":null,"Total":0}
Abstract
This research introduces a novel approach to energy management in Smart Energy Systems (SES), using Deep Reinforcement Learning (DRL) to optimize the management of flexible energy systems in SES, including heating, cooling and electricity storage systems along with District Heating and Cooling Systems (DHCS). The proposed approach is applied to Meridia Smart Energy (MSE), a French demonstration project for SES. The proposed DRL framework, based on an actor–critic architecture, is first applied to a Modelica digital twin that we developed for the MSE SES and is benchmarked against a rule-based approach. The DRL agent learned an effective strategy for managing the thermal and electrical storage systems, resulting in optimized energy costs within the SES. Notably, the acquired strategy achieved an annual cost reduction of at least 5% compared to the rule-based benchmark strategy. Moreover, the near-real-time decision-making capability of the trained DRL agent provides a significant advantage over traditional optimization methods that require time-consuming re-computation at each decision point. By training the DRL agent on a digital twin of the real-world MSE project, rather than on hypothetical simulation models, this study lays the foundation for a pioneering application of DRL in the real-world MSE SES, showcasing its potential for practical implementation.
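To make the actor–critic idea behind the abstract concrete, the following is a minimal sketch, not the authors' implementation: it trains a linear advantage actor–critic agent on a toy single-battery arbitrage environment. The environment (`ToyStorageEnv`), its prices, capacity, reward, and feature choices are hypothetical stand-ins for the MSE Modelica digital twin, which is not reproduced here.

```python
# Minimal advantage actor-critic sketch on a toy storage environment.
# Everything below (environment, reward, features, hyperparameters) is illustrative,
# not the MSE digital twin or the paper's actual DRL configuration.
import numpy as np

rng = np.random.default_rng(0)

class ToyStorageEnv:
    """Hypothetical 24-step battery arbitrage task (stand-in for the digital twin)."""
    def __init__(self):
        # Simple sinusoidal day-ahead price profile in EUR/kWh (illustrative).
        self.prices = 0.10 + 0.05 * np.sin(np.linspace(0, 2 * np.pi, 24))
        self.reset()

    def reset(self):
        self.t = 0
        self.soc = 0.5  # state of charge in [0, 1]
        return self._obs()

    def _obs(self):
        return np.array([self.prices[self.t], self.soc])

    def step(self, action):
        # action: 0 = discharge, 1 = idle, 2 = charge (each moves 10% of capacity)
        delta = np.clip((action - 1) * 0.1, -self.soc, 1.0 - self.soc)
        self.soc += delta
        # Negative cost of energy bought (delta > 0), revenue when selling (delta < 0);
        # 100 kWh nominal capacity, losses ignored for simplicity.
        reward = -self.prices[self.t] * delta * 100.0
        self.t += 1
        done = self.t == len(self.prices)
        return (self._obs() if not done else None), reward, done

def features(obs):
    price, soc = obs
    return np.array([1.0, price * 10.0, soc, price * 10.0 * soc])

n_actions, n_feat = 3, 4
theta = np.zeros((n_actions, n_feat))   # actor: softmax policy weights
w = np.zeros(n_feat)                    # critic: linear state-value weights
alpha_pi, alpha_v, gamma = 0.01, 0.05, 0.99

def policy(x):
    logits = theta @ x
    p = np.exp(logits - logits.max())
    return p / p.sum()

env = ToyStorageEnv()
for episode in range(2000):
    obs, done = env.reset(), False
    while not done:
        x = features(obs)
        p = policy(x)
        a = rng.choice(n_actions, p=p)
        next_obs, r, done = env.step(a)
        v = w @ x
        v_next = 0.0 if done else w @ features(next_obs)
        td_error = r + gamma * v_next - v           # one-step advantage estimate
        w += alpha_v * td_error * x                 # critic update (semi-gradient TD(0))
        grad_log_pi = -p[:, None] * x[None, :]      # d log pi(a|x) / d theta
        grad_log_pi[a] += x
        theta += alpha_pi * td_error * grad_log_pi  # actor update (policy gradient)
        obs = next_obs
```

Once trained, the learned policy is evaluated by a single forward pass through `policy(features(obs))` at each time step, which is what gives a DRL controller its near-real-time decision-making advantage over re-solving an optimization problem at every decision point.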