{"title":"基于lyapunov的微电网能量管理安全强化学习","authors":"Guokai Hao;Yuanzheng Li;Yang Li;Lin Jiang;Zhigang Zeng","doi":"10.1109/TNNLS.2024.3496932","DOIUrl":null,"url":null,"abstract":"The rapid development of renewable energy sources (RESs) has led to their increased integration into microgrids (MGs), emphasizing the need for safe and efficient energy management in MG operations. We investigate the methods of MG energy management, primarily categorized into model-based and model-free approaches. Due to a lack of incremental knowledge, model-based methods need to be reengineered for new scenarios during the optimization process, leading to reduced computational efficiency. In contrast, model-free methods can obtain incremental knowledge via trial-and-error in the training phase, and output energy management scheme rapidly. However, ensuring the safety of the scheme during the training phases poses significant challenges. To address these challenges, we propose a safe reinforcement learning (SRL) framework. The proposed SRL framework initially includes a safety assessment optimization model (SAOM) to evaluate scheme constraints and refine unsafe schemes for ensuring MG safety. Subsequently, based on SAOM, the MG energy management issue is formulated as an assess-based constrained Markov decision process (A-CMDP), enabling the SRL can be adopted in this issue. After that, we adopt a Lyapunov-based safety policy optimization for agent policy learning to ensure that policy updates are confined within a safe boundary, theoretically ensuring the safety of the MG throughout the learning process. Numerical studies highlight the superior performance of our proposed method. 
Specifically, the SRL framework effectively learns energy management policy, ensures MG safety, and demonstrates outstanding outcomes in the economic operation of MG.","PeriodicalId":13303,"journal":{"name":"IEEE transactions on neural networks and learning systems","volume":"36 6","pages":"9985-9999"},"PeriodicalIF":8.9000,"publicationDate":"2024-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Lyapunov-Based Safe Reinforcement Learning for Microgrid Energy Management\",\"authors\":\"Guokai Hao;Yuanzheng Li;Yang Li;Lin Jiang;Zhigang Zeng\",\"doi\":\"10.1109/TNNLS.2024.3496932\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The rapid development of renewable energy sources (RESs) has led to their increased integration into microgrids (MGs), emphasizing the need for safe and efficient energy management in MG operations. We investigate the methods of MG energy management, primarily categorized into model-based and model-free approaches. Due to a lack of incremental knowledge, model-based methods need to be reengineered for new scenarios during the optimization process, leading to reduced computational efficiency. In contrast, model-free methods can obtain incremental knowledge via trial-and-error in the training phase, and output energy management scheme rapidly. However, ensuring the safety of the scheme during the training phases poses significant challenges. To address these challenges, we propose a safe reinforcement learning (SRL) framework. The proposed SRL framework initially includes a safety assessment optimization model (SAOM) to evaluate scheme constraints and refine unsafe schemes for ensuring MG safety. Subsequently, based on SAOM, the MG energy management issue is formulated as an assess-based constrained Markov decision process (A-CMDP), enabling the SRL can be adopted in this issue. 
After that, we adopt a Lyapunov-based safety policy optimization for agent policy learning to ensure that policy updates are confined within a safe boundary, theoretically ensuring the safety of the MG throughout the learning process. Numerical studies highlight the superior performance of our proposed method. Specifically, the SRL framework effectively learns energy management policy, ensures MG safety, and demonstrates outstanding outcomes in the economic operation of MG.\",\"PeriodicalId\":13303,\"journal\":{\"name\":\"IEEE transactions on neural networks and learning systems\",\"volume\":\"36 6\",\"pages\":\"9985-9999\"},\"PeriodicalIF\":8.9000,\"publicationDate\":\"2024-12-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE transactions on neural networks and learning systems\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10795439/\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on neural networks and learning systems","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10795439/","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Lyapunov-Based Safe Reinforcement Learning for Microgrid Energy Management
The rapid development of renewable energy sources (RESs) has led to their increased integration into microgrids (MGs), emphasizing the need for safe and efficient energy management in MG operations. We investigate methods for MG energy management, which fall primarily into model-based and model-free approaches. Because they lack incremental knowledge, model-based methods must be re-engineered for new scenarios during the optimization process, which reduces computational efficiency. In contrast, model-free methods can acquire incremental knowledge via trial-and-error in the training phase and rapidly output energy management schemes. However, ensuring the safety of these schemes during the training phase poses significant challenges. To address them, we propose a safe reinforcement learning (SRL) framework. The proposed SRL framework first includes a safety assessment optimization model (SAOM) that evaluates scheme constraints and refines unsafe schemes to ensure MG safety. Subsequently, based on the SAOM, the MG energy management problem is formulated as an assess-based constrained Markov decision process (A-CMDP), enabling SRL to be applied to it. We then adopt a Lyapunov-based safety policy optimization for agent policy learning, so that policy updates are confined within a safe boundary, theoretically ensuring the safety of the MG throughout the learning process. Numerical studies highlight the superior performance of the proposed method. Specifically, the SRL framework effectively learns an energy management policy, ensures MG safety, and demonstrates outstanding outcomes in the economic operation of the MG.
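To make the SAOM idea concrete, the following is a minimal illustrative sketch (not the authors' formulation) of a safety-assessment step that repairs an infeasible energy management scheme before it is applied: a proposed generator dispatch is projected onto capacity limits and then rescaled toward the load demand. All names, bounds, and the projection rule are assumptions for illustration only.

```python
# Toy "safety assessment" repair, in the spirit of a SAOM that refines unsafe
# schemes. A real formulation would solve a constrained optimization problem;
# here we use a simple clip-and-rescale heuristic as a stand-in.

def refine_scheme(dispatch, p_min, p_max, demand):
    """Clip each unit's proposed output to its capacity limits, then rescale
    the dispatch toward the demand (re-clipping so limits still hold)."""
    # Step 1: enforce per-unit box constraints.
    clipped = [min(max(p, lo), hi) for p, lo, hi in zip(dispatch, p_min, p_max)]
    total = sum(clipped)
    if total == 0:
        return clipped  # nothing to rescale
    # Step 2: scale total generation toward the demand, staying within limits.
    scale = demand / total
    return [min(max(p * scale, lo), hi) for p, lo, hi in zip(clipped, p_min, p_max)]

# Example: unit 2's proposed output is negative and unit 3 exceeds its limit;
# the repaired scheme is feasible and meets the 12.0 (per-unit) demand.
safe = refine_scheme([5.0, -2.0, 9.0], [0.0, 0.0, 0.0], [6.0, 4.0, 8.0], 12.0)
```

In the paper's full framework this assessment step is embedded in the A-CMDP, and the Lyapunov-based policy optimization additionally constrains how far each policy update may move, so safety holds during learning rather than only at deployment.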
Journal Introduction:
The focus of IEEE Transactions on Neural Networks and Learning Systems is to present scholarly articles discussing the theory, design, and applications of neural networks as well as other learning systems. The journal primarily highlights technical and scientific research in this domain.