X. Shang, Ye Lin, Jing Zhang, Jingping Yang, Jianping Xu, Qin Lyu, R. Diao
Title: Reinforcement Learning-Based Solution to Power Grid Planning and Operation Under Uncertainties
Journal: Foundations and Trends in Machine Learning, vol. 18, no. 1, pp. 72-79
Publication date: 2020-11-01
Publication type: Journal Article
DOI: 10.1109/MLHPCAI4S51975.2020.00015
Citations: 1
Abstract
With the ever-increasing stochastic and dynamic behavior observed in today's bulk power systems, securely and economically planning future operational scenarios that meet all reliability standards under uncertainties becomes a challenging computational task, one that typically involves searching for feasible and suboptimal solutions in a high-dimensional space via massive numerical simulations. This paper presents a novel approach to achieving this goal by adopting a state-of-the-art reinforcement learning algorithm, Soft Actor Critic (SAC). First, the optimization problem of finding feasible solutions under uncertainties is formulated as a Markov Decision Process (MDP). Second, a general and flexible framework is developed to train a SAC agent that adjusts generator active power outputs in search of feasible operating conditions. A software prototype is developed that verifies the effectiveness of the proposed approach via numerical studies conducted on planning cases from the SGCC Zhejiang Electric Power Company.
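The abstract's two steps (an MDP formulation, then an agent that adjusts generator active-power outputs toward feasibility) can be sketched as a minimal environment interface. This is an illustrative assumption, not the authors' actual formulation: the class name, the single power-balance constraint standing in for full reliability checks, and all limits below are hypothetical.

```python
import numpy as np

class GridPlanningEnv:
    """Toy MDP for feasibility search, in the spirit of the paper's setup.

    State:  vector of generator active-power outputs (p.u.).
    Action: continuous adjustments to those outputs.
    Reward: negative constraint violation (here, only total power balance),
            so maximizing reward drives the agent toward feasibility.
    All parameters are illustrative placeholders, not from the paper.
    """

    def __init__(self, n_gen=3, p_min=0.0, p_max=1.0, demand=2.0, tol=0.05):
        self.n_gen = n_gen
        self.p_min, self.p_max = p_min, p_max
        self.demand = demand        # total load to serve (p.u.), assumed fixed
        self.tol = tol              # feasibility tolerance on the imbalance
        self.p = None

    def reset(self):
        # Start from a random (possibly infeasible) dispatch.
        self.p = np.random.uniform(self.p_min, self.p_max, self.n_gen)
        return self.p.copy()

    def step(self, action):
        # Adjust generator outputs, clipped to their operating limits.
        self.p = np.clip(self.p + action, self.p_min, self.p_max)
        imbalance = abs(self.p.sum() - self.demand)
        reward = -imbalance                  # penalize constraint violation
        done = imbalance < self.tol          # "feasible" operating condition
        return self.p.copy(), reward, done

env = GridPlanningEnv()
state = env.reset()
state, reward, done = env.step(np.zeros(env.n_gen))
```

A SAC agent would replace the zero action above with samples from its stochastic policy, updating the policy from `(state, action, reward, next_state)` transitions; the environment interface itself stays the same.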
About the journal:
Each issue of Foundations and Trends® in Machine Learning comprises a monograph of at least 50 pages written by research leaders in the field. We aim to publish monographs that provide an in-depth, self-contained treatment of topics where there have been significant new developments. Typically, this means that the monographs we publish will contain a significant level of mathematical detail (to describe the central methods and/or theory for the topic at hand), and will not eschew these details by simply pointing to existing references. Literature surveys and original research papers do not fall within these aims.