Optimal control of HVAC systems through active disturbance rejection control-assisted reinforcement learning

IF 9.4 | Q1 ENERGY & FUELS (JCR) | Zone 1, Engineering & Technology (CAS) | Energy | Pub Date: 2025-03-25 | DOI: 10.1016/j.energy.2025.135824
Can Cui, Jiahui Xue, Lanjun Liu
{"title":"Optimal control of HVAC systems through active disturbance rejection control-assisted reinforcement learning","authors":"Can Cui,&nbsp;Jiahui Xue,&nbsp;Lanjun Liu","doi":"10.1016/j.energy.2025.135824","DOIUrl":null,"url":null,"abstract":"<div><div>Optimal control of multi-zone HVAC systems may suffer from noise and disturbances that affect control accuracy and performance, and faces computational challenges caused by multiple control variables. To address these challenges, this paper proposes a novel method that incorporates reinforcement learning and active disturbance rejection control through a main-auxiliary controller configuration. A main controller is designed based on twin delayed deep deterministic policy gradient, which is responsible for controlling zone supply airflows. An auxiliary controller is configured based on active disturbance rejection control, which regulates the fresh air ratio and meanwhile handling the disturbances and uncertainties. The two controllers work in parallel with exchange information in real-time to optimize HVAC systems in dynamically uncertain environments. In the proposed method, the control variables are separated and handled by main and auxiliary controllers respectively, which reduces the action space of reinforcement learning algorithm and partly decouples the thermal loads and ventilation loads. An EnergyPlus-Python co-simulation platform has been developed using real-world data. Test results demonstrate that the proposed AD-RL method can enhance indoor comfort and IAQ. Furthermore, compared to the rule-based method and the classical TD3-based approach, it can reduce the daily HVAC energy consumption by up to 22.37 % and 13.53 %, respectively.</div></div>","PeriodicalId":11647,"journal":{"name":"Energy","volume":"323 ","pages":"Article 135824"},"PeriodicalIF":9.4000,"publicationDate":"2025-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Energy","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0360544225014665","RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENERGY & FUELS","Score":null,"Total":0}
Citations: 0

Abstract

Optimal control of multi-zone HVAC systems can suffer from noise and disturbances that degrade control accuracy and performance, and it faces computational challenges arising from multiple control variables. To address these challenges, this paper proposes a novel method that combines reinforcement learning and active disturbance rejection control through a main-auxiliary controller configuration. A main controller based on the twin delayed deep deterministic policy gradient (TD3) algorithm is responsible for controlling zone supply airflows. An auxiliary controller based on active disturbance rejection control regulates the fresh air ratio while handling disturbances and uncertainties. The two controllers work in parallel and exchange information in real time to optimize HVAC operation in dynamically uncertain environments. In the proposed method, the control variables are separated and handled by the main and auxiliary controllers respectively, which reduces the action space of the reinforcement learning algorithm and partly decouples the thermal loads from the ventilation loads. An EnergyPlus-Python co-simulation platform has been developed using real-world data. Test results demonstrate that the proposed AD-RL method can enhance indoor comfort and indoor air quality (IAQ). Furthermore, compared to a rule-based method and the classical TD3-based approach, it reduces daily HVAC energy consumption by up to 22.37 % and 13.53 %, respectively.
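To make the main-auxiliary configuration concrete, the sketch below pairs a placeholder TD3 actor (main controller, per-zone supply airflows) with a first-order linear ADRC (auxiliary controller, fresh-air ratio against a CO2/IAQ setpoint). This is a minimal illustration only: the `td3_policy` stub, the `LinearADRC` tuning values, and the commented-out `hvac_env` interface are assumptions for readability, not the authors' implementation.

```python
# Conceptual sketch of the main-auxiliary control loop described in the abstract.
# All numeric values and the environment interface are illustrative assumptions.
import numpy as np


class LinearADRC:
    """Auxiliary controller: linear ADRC with a second-order extended state observer.

    The observer estimates the measured output (z1) and the lumped disturbance (z2,
    e.g. occupancy and infiltration loads) and cancels the latter in the control law.
    """

    def __init__(self, b0, wo, wc, dt, y0=0.0, u_min=0.0, u_max=1.0):
        self.b0, self.dt = b0, dt                  # nominal input gain, sample time [s]
        self.kp = wc                               # controller bandwidth [rad/s]
        self.beta1, self.beta2 = 2.0 * wo, wo**2   # observer gains from bandwidth wo
        self.z1, self.z2 = y0, 0.0                 # ESO states
        self.u_min, self.u_max = u_min, u_max      # feasible fresh-air ratio range
        self.u = 0.0

    def update(self, y_meas, y_ref):
        # Extended state observer: correct both states with the estimation error.
        e = y_meas - self.z1
        self.z1 += self.dt * (self.z2 + self.b0 * self.u + self.beta1 * e)
        self.z2 += self.dt * (self.beta2 * e)
        # Disturbance-rejecting control law, saturated to a feasible ratio.
        u0 = self.kp * (y_ref - self.z1)
        self.u = float(np.clip((u0 - self.z2) / self.b0, self.u_min, self.u_max))
        return self.u


def td3_policy(observation):
    """Stand-in for the trained TD3 actor (main controller).

    A real implementation would load the trained actor network and map the zone
    observation vector to per-zone supply-airflow setpoints.
    """
    n_zones = 4
    return np.full(n_zones, 0.3)  # kg/s per zone, arbitrary illustrative values


# --- main-auxiliary loop (environment step is hypothetical) ----------------------
adrc = LinearADRC(b0=-5.0, wo=0.005, wc=0.002, dt=60.0, y0=900.0)  # illustrative tuning
co2_setpoint = 800.0                                               # ppm IAQ target
state = {"zone_temps": np.full(4, 24.0), "co2": 900.0}

for step in range(5):                                      # one-minute control steps
    airflows = td3_policy(state)                           # main controller action
    fresh_air_ratio = adrc.update(state["co2"], co2_setpoint)  # auxiliary action
    print(step, airflows, round(fresh_air_ratio, 3))
    # state = hvac_env.step(airflows, fresh_air_ratio)  # e.g. EnergyPlus-Python co-simulation
```

The split mirrors the abstract's point about the reduced action space: the RL agent only outputs airflows, while the fresh-air ratio and disturbance rejection are delegated to the ADRC loop.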
Source journal
Energy (Engineering & Technology - Energy & Fuels)
CiteScore: 15.30
Self-citation rate: 14.40%
Articles published: 0
Review time: 14.2 weeks
About the journal: Energy is a multidisciplinary, international journal that publishes research and analysis in the field of energy engineering. Our aim is to become a leading peer-reviewed platform and a trusted source of information for energy-related topics. The journal covers a range of areas including mechanical engineering, thermal sciences, and energy analysis. We are particularly interested in research on energy modelling, prediction, integrated energy systems, planning, and management. Additionally, we welcome papers on energy conservation, efficiency, biomass and bioenergy, renewable energy, electricity supply and demand, energy storage, buildings, and economic and policy issues. These topics should align with our broader multidisciplinary focus.
Latest articles from this journal
Zero-carbon microgrid energy system with seasonal hydrogen storage for high-proportion renewable energy consumption
Exploring the role of nano-enhanced PCMs in indirect solar drying for sustainable guava dehydration with enhanced thermal storage and product stability
Global assessment of wind-solar hybrid systems: unraveling physical constraints and economic potential for sustainable energy deployment
A novel architecture for enhanced thermal management in fuel cell cooling systems
Redesigning energy transition pathways: Integrating mineral constraints and circular economy in net-zero power system planning