{"title":"Stability-Guided Reinforcement Learning Control for Power Converters: A Lyapunov Approach","authors":"Yihao Wan;Qianwen Xu","doi":"10.1109/TIE.2024.3522491","DOIUrl":null,"url":null,"abstract":"Reinforcement learning (RL) has gained popularity in power electronics due to its ability to handle nonlinearities and self-learning characteristics. When properly configured, an RL agent can autonomously learn the optimal control policy by interacting with the converter system. In particular, similar to conventional finite-control-set model predictive control (FCS-MPC), the RL agent can learn the optimal switching strategy for the power converter and achieve desirable control performance. However, the alteration of closed-loop dynamics by the RL controller poses challenges in ensuring and assessing system stability. To address this, the article proposes formulating a Lyapunov function to guide the agent in learning an optimal control policy that enhances desirable control performance while ensuring closed-loop stability. Additionally, the practical stability region of the system is quantified by deriving a compact set regarding the convergence of voltage control error. Finally, the proposed Lyapunov-guided RL controller is validated through a demonstration framework with a practical experimental setup. Both simulation and experimental results confirm the effectiveness of the proposed method.","PeriodicalId":13402,"journal":{"name":"IEEE Transactions on Industrial Electronics","volume":"72 7","pages":"7553-7562"},"PeriodicalIF":7.2000,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Industrial Electronics","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10820008/","RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"AUTOMATION & CONTROL SYSTEMS","Score":null,"Total":0}
Citations: 0
Abstract
Reinforcement learning (RL) has gained popularity in power electronics due to its ability to handle nonlinearities and self-learning characteristics. When properly configured, an RL agent can autonomously learn the optimal control policy by interacting with the converter system. In particular, similar to conventional finite-control-set model predictive control (FCS-MPC), the RL agent can learn the optimal switching strategy for the power converter and achieve desirable control performance. However, the alteration of closed-loop dynamics by the RL controller poses challenges in ensuring and assessing system stability. To address this, the article proposes formulating a Lyapunov function to guide the agent in learning an optimal control policy that enhances desirable control performance while ensuring closed-loop stability. Additionally, the practical stability region of the system is quantified by deriving a compact set regarding the convergence of voltage control error. Finally, the proposed Lyapunov-guided RL controller is validated through a demonstration framework with a practical experimental setup. Both simulation and experimental results confirm the effectiveness of the proposed method.
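The abstract outlines the core idea: guide the RL agent with a Lyapunov function so that the learned switching policy optimizes control performance without sacrificing closed-loop stability. As a rough, self-contained illustration of that idea only, the sketch below implements a Lyapunov-shaped reward on a simple averaged buck-converter model. The converter parameters, the quadratic candidate V(e) = eᵀPe, the weight LAMBDA_LYAP, and the greedy one-step policy are all assumptions made for demonstration and are not taken from the paper.

```python
import numpy as np

# Minimal sketch (not the authors' implementation): Lyapunov-guided reward
# shaping for a finite-control-set switching policy on an averaged DC-DC buck
# converter. All parameter values, the weighting matrix P, and the shaping
# weight LAMBDA_LYAP are illustrative assumptions, not values from the paper.

L, C, R = 1e-3, 470e-6, 10.0   # inductance [H], capacitance [F], load [Ohm]
V_DC, V_REF = 48.0, 24.0       # input voltage and output-voltage reference [V]
DT = 1e-5                      # sampling / switching decision period [s]

I_REF = V_REF / R              # steady-state inductor-current reference [A]
P = np.diag([1.0, 10.0])       # assumed Lyapunov weighting matrix
LAMBDA_LYAP = 5.0              # assumed weight on the Lyapunov shaping term


def step(x, s):
    """One forward-Euler step of the averaged buck model for switch state s in {0, 1}."""
    i_l, v_o = x
    di = (s * V_DC - v_o) / L
    dv = (i_l - v_o / R) / C
    return np.array([i_l + DT * di, v_o + DT * dv])


def lyapunov(x):
    """Quadratic Lyapunov candidate V(e) = e' P e on the tracking error."""
    e = np.array([x[0] - I_REF, x[1] - V_REF])
    return float(e @ P @ e)


def reward(x, x_next):
    """Voltage-tracking reward shaped by the change of the Lyapunov candidate."""
    tracking = -(x_next[1] - V_REF) ** 2
    d_v = lyapunov(x_next) - lyapunov(x)   # negative when the candidate decreases
    return tracking - LAMBDA_LYAP * d_v    # reward decrease, penalize increase


# Usage example: a greedy one-step lookahead over the finite control set {0, 1}
# stands in here for the learned RL policy, just to exercise the shaped reward.
x = np.array([0.0, 0.0])                   # initial inductor current and output voltage
for _ in range(5000):                      # 5000 * DT = 50 ms of simulated time
    candidates = [step(x, s) for s in (0, 1)]
    s_best = int(np.argmax([reward(x, xn) for xn in candidates]))
    x = candidates[s_best]
print(f"output voltage after 50 ms: {x[1]:.2f} V (reference {V_REF} V)")
```

In the paper the switching action would come from a trained RL agent; the greedy lookahead above merely stands in for a policy so the shaped reward can be exercised end to end.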
Journal Introduction:
Journal Name: IEEE Transactions on Industrial Electronics
Publication Frequency: Monthly
Scope:
The scope of IEEE Transactions on Industrial Electronics encompasses the following areas:
Applications of electronics, controls, and communications in industrial and manufacturing systems and processes.
Power electronics and drive control techniques.
System control and signal processing.
Fault detection and diagnosis.
Power systems.
Instrumentation, measurement, and testing.
Modeling and simulation.
Motion control.
Robotics.
Sensors and actuators.
Implementation of neural networks, fuzzy logic, and artificial intelligence in industrial systems.
Factory automation.
Communication and computer networks.