Substation Topology and Line Switching Control Using Deep Reinforcement Learning
Rajarshi Roychowdhury, John B. Ocampo, Balaji Guddanti, M. Illindala
2022 IEEE/IAS 58th Industrial and Commercial Power Systems Technical Conference (I&CPS), 2 May 2022. DOI: 10.1109/ICPS54075.2022.9773937
The Electric Power System (EPS) is widely regarded as one of the most complex artificial systems ever created. With the growing penetration of distributed energy resources, controlling power systems is becoming even more challenging. This paper presents the use of the Dueling DQN (DDQN) reinforcement learning algorithm to control line switching and substation topology of the EPS so that line flows remain within limits across all contingency scenarios. The DDQN algorithm is particularly well suited to power systems because the state of the environment is often not significantly affected by an agent's actions, especially during normal operating conditions. This allows the DDQN agent to quickly learn which states are unimportant, a definite advantage over traditional vanilla Deep Q Networks. For real-time control of the EPS, not having to learn all the redundant states brings faster convergence and reduced training time, both highly desirable in a complex use case like the one studied. The DDQN algorithm was tested on the standard IEEE 14-bus system, and the agent maintained system stability under varied grid operating scenarios.
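The abstract's key point is the dueling architecture: by splitting the Q-function into a state-value stream and an action-advantage stream, the agent can learn how good a grid state is even when the choice of switching action barely matters. The following is a minimal PyTorch sketch of that architecture, not the authors' implementation; the state dimension, action count, and layer sizes are hypothetical placeholders, and a real agent for the IEEE 14-bus case would derive them from the grid observation and the set of line-switching and substation-topology actions.

```python
import torch
import torch.nn as nn

class DuelingQNetwork(nn.Module):
    """Dueling Q-network: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)."""

    def __init__(self, state_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        # Shared feature extractor over the grid state (e.g., line flows, injections).
        self.features = nn.Sequential(
            nn.Linear(state_dim, hidden),
            nn.ReLU(),
        )
        # Value stream: how good the state is, independent of the action taken.
        self.value = nn.Sequential(
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )
        # Advantage stream: relative benefit of each switching/topology action.
        self.advantage = nn.Sequential(
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        x = self.features(state)
        v = self.value(x)        # shape: (batch, 1)
        a = self.advantage(x)    # shape: (batch, n_actions)
        # Subtracting the mean advantage keeps the decomposition identifiable and
        # lets the network learn state values even in states where the action
        # choice has little effect, e.g. normal operating conditions.
        return v + a - a.mean(dim=1, keepdim=True)
```

The mean-subtraction in `forward` is the standard way to make the value/advantage split unique; it is what allows the agent to quickly assess states where no action matters much, which is the advantage over a vanilla Deep Q Network that the abstract emphasizes.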