{"title":"基于安全深度强化学习的微电网在线能量管理","authors":"Hepeng Li, Zhenhua Wang, Lusi Li, Haibo He","doi":"10.1109/SSCI50451.2021.9659545","DOIUrl":null,"url":null,"abstract":"Microgrids provide power systems with an effective manner to integrate distributed energy resources, increase power supply reliability, and reduce operational cost. However, intermittent renewable energy resources (RESs) makes it challenging to operate a microgrid safely and economically based on forecasting. To overcome this issue, we develop an online energy management approach for efficient microgrid operation using safe deep reinforcement learning (SDRL). By considering uncertainties and AC power flow, the proposed method formulates online microgrid energy management as a constrained Markov decision process (CMDP). The objective is to find a safety-guaranteed scheduling policy to minimize the total operational cost. To achieve this, we use a SDRL method to learn a neural network-based policy based on constrained policy optimization (CPO). Different from tradition DRL methods that allow an agent to freely explore any behavior during training, the proposed method limits the exploration to safe policies that satisfy AC power flow constraints during training. The proposed method is model-free and does not require predictive information or explicit model of the microgrid. The proposed method is trained and tested on a medium voltage distribution network with real-world power grid data from California Independent Operator (CAISO). 
Simulation results verify the effectiveness and superiority of proposed method over traditional DRL approaches.","PeriodicalId":255763,"journal":{"name":"2021 IEEE Symposium Series on Computational Intelligence (SSCI)","volume":"96 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":"{\"title\":\"Online Microgrid Energy Management Based on Safe Deep Reinforcement Learning\",\"authors\":\"Hepeng Li, Zhenhua Wang, Lusi Li, Haibo He\",\"doi\":\"10.1109/SSCI50451.2021.9659545\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Microgrids provide power systems with an effective manner to integrate distributed energy resources, increase power supply reliability, and reduce operational cost. However, intermittent renewable energy resources (RESs) makes it challenging to operate a microgrid safely and economically based on forecasting. To overcome this issue, we develop an online energy management approach for efficient microgrid operation using safe deep reinforcement learning (SDRL). By considering uncertainties and AC power flow, the proposed method formulates online microgrid energy management as a constrained Markov decision process (CMDP). The objective is to find a safety-guaranteed scheduling policy to minimize the total operational cost. To achieve this, we use a SDRL method to learn a neural network-based policy based on constrained policy optimization (CPO). Different from tradition DRL methods that allow an agent to freely explore any behavior during training, the proposed method limits the exploration to safe policies that satisfy AC power flow constraints during training. The proposed method is model-free and does not require predictive information or explicit model of the microgrid. 
The proposed method is trained and tested on a medium voltage distribution network with real-world power grid data from California Independent Operator (CAISO). Simulation results verify the effectiveness and superiority of proposed method over traditional DRL approaches.\",\"PeriodicalId\":255763,\"journal\":{\"name\":\"2021 IEEE Symposium Series on Computational Intelligence (SSCI)\",\"volume\":\"96 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-12-05\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"5\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 IEEE Symposium Series on Computational Intelligence (SSCI)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/SSCI50451.2021.9659545\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE Symposium Series on Computational Intelligence (SSCI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SSCI50451.2021.9659545","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Online Microgrid Energy Management Based on Safe Deep Reinforcement Learning
Microgrids provide power systems with an effective means to integrate distributed energy resources, increase power supply reliability, and reduce operational cost. However, intermittent renewable energy resources (RESs) make it challenging to operate a microgrid safely and economically based on forecasting. To overcome this issue, we develop an online energy management approach for efficient microgrid operation using safe deep reinforcement learning (SDRL). By considering uncertainties and AC power flow, the proposed method formulates online microgrid energy management as a constrained Markov decision process (CMDP). The objective is to find a safety-guaranteed scheduling policy that minimizes the total operational cost. To achieve this, we use an SDRL method to learn a neural network-based policy via constrained policy optimization (CPO). Unlike traditional DRL methods, which allow an agent to freely explore any behavior during training, the proposed method restricts exploration to safe policies that satisfy AC power flow constraints throughout training. The proposed method is model-free and requires neither predictive information nor an explicit model of the microgrid. It is trained and tested on a medium-voltage distribution network with real-world power grid data from the California Independent System Operator (CAISO). Simulation results verify the effectiveness and superiority of the proposed method over traditional DRL approaches.
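The CMDP formulation and the CPO update mentioned in the abstract can be sketched in standard notation. This is the generic form from the constrained-RL literature, not the paper's exact equations; here $c$ is a stand-in for the per-step operational cost, $d$ for an AC power flow constraint cost, and $\bar{d}$ for its allowed budget (all illustrative assumptions):

```latex
% Generic CMDP: minimize expected discounted operational cost
% subject to a bound on an expected discounted constraint cost.
\begin{aligned}
\min_{\pi}\quad & J_c(\pi) = \mathbb{E}_{\tau \sim \pi}\!\Big[\sum_{t=0}^{\infty} \gamma^{t}\, c(s_t, a_t)\Big] \\
\text{s.t.}\quad & J_d(\pi) = \mathbb{E}_{\tau \sim \pi}\!\Big[\sum_{t=0}^{\infty} \gamma^{t}\, d(s_t, a_t)\Big] \le \bar{d}.
\end{aligned}

% CPO-style trust-region policy update: improve the cost surrogate while
% keeping the constraint surrogate within budget and the new policy close
% to the old one in average KL divergence.
\begin{aligned}
\pi_{k+1} = \arg\min_{\pi}\quad
  & \mathbb{E}_{s,a \sim \pi_k}\!\Big[\tfrac{\pi(a \mid s)}{\pi_k(a \mid s)}\, A^{c}_{\pi_k}(s,a)\Big] \\
\text{s.t.}\quad
  & J_d(\pi_k) + \tfrac{1}{1-\gamma}\,\mathbb{E}_{s,a \sim \pi_k}\!\Big[\tfrac{\pi(a \mid s)}{\pi_k(a \mid s)}\, A^{d}_{\pi_k}(s,a)\Big] \le \bar{d}, \\
  & \bar{D}_{\mathrm{KL}}\big(\pi \,\|\, \pi_k\big) \le \delta.
\end{aligned}
```

Solving the update approximately at each iteration keeps every intermediate policy (near-)feasible, which is what allows the agent to respect AC power flow constraints during training rather than only at deployment.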