Chance Constrained MDP Formulation and Bayesian Advantage Policy Optimization for Stochastic Dynamic Optimal Power Flow

Yizhi Wu; Yujian Ye; Jianxiong Hu; Peilin Zhao; Liu Liu; Goran Strbac; Chongqing Kang

IEEE Transactions on Power Systems (Q1, Engineering, Electrical & Electronic) · DOI: 10.1109/TPWRS.2024.3430650 · Published 2024-07-18 · https://ieeexplore.ieee.org/document/10602778/
Although deep reinforcement learning based on the Markov Decision Process (MDP) is a well-suited method for real-time control under uncertainty, its application to the stochastic dynamic optimal power flow (SDOPF) problem remains challenging amid the increasing proliferation of distributed energy resources, owing to its limited ability to satisfy constraints under uncertainty. While pioneering research has explored Constrained MDP and Risk-Aware MDP formulations of SDOPF that minimize cumulative constraint violations, both struggle to satisfy state-wise safety constraints. This letter proposes a Chance Constrained MDP formulation of SDOPF and a Bayesian advantage policy optimization solution method. Bayesian neural networks are used to construct the probability distributions of state- and trajectory-wise constraint violations, and a novel advantage function is incorporated to improve both the policy's quality and safety. Case studies validate the cost-efficiency and comprehensive safety performance of the proposed method against the state of the art.
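The state-wise chance constraint described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual model: the Gaussian posterior samples stand in for draws from a Bayesian neural network's predictive distribution of a constraint quantity (e.g. a line flow), and the limit and risk level epsilon are hypothetical values chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def violation_probability(samples, limit):
    """Estimate P(g(s, a) > limit) from posterior samples of the
    constraint quantity g, as a Bayesian model would yield via
    Monte Carlo sampling of its predictive distribution."""
    return float(np.mean(samples > limit))

def chance_constrained_ok(samples, limit, epsilon=0.05):
    """State-wise chance constraint: an action is admissible only if
    the estimated violation probability does not exceed epsilon."""
    return violation_probability(samples, limit) <= epsilon

# Hypothetical posterior samples of a line flow in MW, well below a
# 100 MW limit, so the chance constraint should be satisfied.
samples = rng.normal(loc=90.0, scale=3.0, size=1000)
print(chance_constrained_ok(samples, limit=100.0, epsilon=0.05))
```

In contrast to a Constrained MDP, which bounds the *expected cumulative* violation over a trajectory, this check is applied per state, which is the distinction the abstract draws between the two formulations.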
Journal introduction:
The scope of IEEE Transactions on Power Systems covers the education, analysis, operation, planning, and economics of electric generation, transmission, and distribution systems for general industrial, commercial, public, and domestic consumption, including the interaction with multi-energy carriers. The focus of these Transactions is the power system from a systems viewpoint rather than its individual components. Its scope comprises five key areas, each with several technical topics: (1) Power Engineering Education, (2) Power System Analysis, Computing, and Economics, (3) Power System Dynamic Performance, (4) Power System Operations, and (5) Power System Planning and Implementation.