Reactive power optimization via deep transfer reinforcement learning for efficient adaptation to multiple scenarios

Congbo Bi, Di Liu, Lipeng Zhu, Chao Lu, Shiyang Li, Yingqi Tang

International Journal of Electrical Power & Energy Systems, Volume 164, Article 110376. Published 2024-11-26. DOI: 10.1016/j.ijepes.2024.110376
Citations: 0
Abstract
Fast reactive power optimization policy-making for various operating scenarios is an important part of power system dispatch. Existing reinforcement learning algorithms alleviate the computational complexity of optimization but suffer from inefficient model retraining for different operating scenarios. To address these problems, this paper proposes a data-efficient transfer reinforcement learning-based reactive power optimization framework. The proposed framework transfers knowledge in two phases: generic state representation learning in the original scenario and specific dynamics learning in multiple target scenarios. A Q-network structure that separately extracts state and action dynamics is designed to learn generalizable state representations and enable generic knowledge transfer. In the specific dynamics learning phase, supervised learning extracts the unique dynamics of each target scenario from offline data, which improves data efficiency and speeds up knowledge transfer. Finally, the proposed framework is tested on the IEEE 39-bus system and the realistic Guangdong provincial power grid, demonstrating its effectiveness and reliability.
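To make the two-phase idea concrete, the sketch below illustrates one possible way to separate a reusable state encoder (the generic knowledge transferred across scenarios) from a scenario-specific action-value head in a Q-network. This is a minimal PyTorch illustration under assumed interfaces, not the authors' implementation; the class `FactoredQNetwork`, the helper `transfer_to_new_scenario`, and all dimensions are hypothetical.

```python
# Illustrative sketch only: a Q-network that factorizes state and action
# processing, assuming a continuous state vector and a discrete set of
# reactive power control actions.
import torch
import torch.nn as nn


class FactoredQNetwork(nn.Module):
    """Q-network with separate state and action branches (hypothetical).

    The state encoder learns a representation intended to generalize across
    operating scenarios; the action head captures scenario-specific dynamics
    and can be re-fit on offline data via supervised learning.
    """

    def __init__(self, state_dim: int, n_actions: int, hidden_dim: int = 128):
        super().__init__()
        # Generic branch: intended to be reused (frozen) in target scenarios.
        self.state_encoder = nn.Sequential(
            nn.Linear(state_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        # Specific branch: scenario-dependent action-value head.
        self.action_head = nn.Linear(hidden_dim, n_actions)

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        z = self.state_encoder(state)   # generalizable state representation
        return self.action_head(z)      # per-action Q-values


def transfer_to_new_scenario(q_net: FactoredQNetwork) -> list:
    """Freeze the generic state encoder and return the parameters that
    would be retrained on offline data from a new operating scenario."""
    for p in q_net.state_encoder.parameters():
        p.requires_grad = False         # keep the generic representation
    return list(q_net.action_head.parameters())
```

In this reading of the framework, only the small action head is updated when adapting to a new scenario, which is one way the reported data efficiency could arise; the actual network structure and training procedure are detailed in the paper.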
About the journal:
The journal covers theoretical developments in electrical power and energy systems and their applications. The coverage embraces: generation and network planning; reliability; long- and short-term operation; expert systems; neural networks; object-oriented systems; system control centres; database and information systems; stock and parameter estimation; system security and adequacy; network theory, modelling and computation; small and large system dynamics; dynamic model identification; on-line control including load and switching control; protection; distribution systems; energy economics; impact of non-conventional systems; and man-machine interfaces.
As well as original research papers, the journal publishes short contributions, book reviews and conference reports. All papers are peer-reviewed by at least two referees.