Yunhe Wei, Al-Amin B. Bugaje, Federica Bellizio, G. Strbac
DOI: 10.1109/ISGT-Europe54678.2022.9960657
Published in: 2022 IEEE PES Innovative Smart Grid Technologies Conference Europe (ISGT-Europe), 2022-10-10
Citations: 0
Abstract
Power system stability is a crucial aspect of power system operation. Combined with preventive control, corrective control can enhance power system stability while reducing preventive control costs and increasing grid asset utilization. However, it is difficult to quantitatively determine the most cost-effective corrective control strategy in the short time following a fault, when the system is in transient conditions. In the proposed approach, reinforcement learning with a Deep Q Network is used to rapidly determine optimized load shedding for different operating conditions, maintaining system stability following faults. A case study on the IEEE 9-bus system is used to test the proposed approach, showing promising performance in terms of accuracy, cost, and reduction in computational time compared to existing approaches.
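The abstract describes an RL agent that maps a post-fault operating condition to a load-shedding action trading off shedding cost against instability. The paper uses a Deep Q Network on the IEEE 9-bus system; as the implementation details are not given here, the following is only a minimal tabular Q-learning sketch of the same idea. The state discretization, action set, toy dynamics, and reward weights below are all illustrative assumptions, not the authors' model.

```python
import random

# Hypothetical toy: states are discretized post-fault severity levels
# (0 = stable), actions are load-shedding fractions. The real work uses a
# Deep Q Network and transient simulations of the IEEE 9-bus system.
STATES = 5
ACTIONS = [0.0, 0.05, 0.10, 0.20]  # fraction of load shed (assumed set)

def step(state, action_idx):
    """Toy dynamics: shedding more load lowers severity faster."""
    shed = ACTIONS[action_idx]
    next_state = max(0, state - int(shed * 20) - (1 if shed > 0 else 0))
    # Reward penalizes shed load (cost) and remaining instability.
    reward = -10.0 * shed - 1.0 * next_state
    return next_state, reward, next_state == 0

def train(episodes=2000, alpha=0.1, gamma=0.95, eps=0.1, seed=0):
    rng = random.Random(seed)
    q = [[0.0] * len(ACTIONS) for _ in range(STATES)]
    for _ in range(episodes):
        s = rng.randrange(1, STATES)      # random post-fault severity
        for _ in range(10):               # short corrective-control horizon
            if rng.random() < eps:        # epsilon-greedy exploration
                a = rng.randrange(len(ACTIONS))
            else:
                a = max(range(len(ACTIONS)), key=lambda i: q[s][i])
            s2, r, done = step(s, a)
            # Standard Q-learning update toward the bootstrapped target.
            target = r + (0.0 if done else gamma * max(q[s2]))
            q[s][a] += alpha * (target - q[s][a])
            s = s2
            if done:
                break
    return q

q = train()
# Greedy policy: for each severity level, the learned shedding action.
policy = [max(range(len(ACTIONS)), key=lambda i: q[s][i]) for s in range(STATES)]
```

In this toy, the learned policy sheds just enough load to restore stability at each severity level, mirroring the cost-effectiveness objective described in the abstract; a DQN replaces the Q-table with a neural network so the policy generalizes across continuous operating conditions.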