Title: Reinforcement Learning for H∞ Optimal Control of Unknown Continuous-Time Linear Systems
Authors: Hongyang Li; Qinglai Wei; Xiangmin Tan
DOI: 10.1109/TCYB.2025.3541815
Journal: IEEE Transactions on Cybernetics, vol. 55, no. 5, pp. 2379-2389 (Q1, Automation & Control Systems)
Publication date: 2025-02-28
URL: https://ieeexplore.ieee.org/document/10908416/
Citations: 0
Abstract
Designing optimal controllers for practical systems is challenging due to unknown system dynamics and unavoidable external disturbances. In this article, the $H_{\infty }$ optimal control problem is investigated for continuous-time linear systems with unknown dynamics. Existing reinforcement learning-based $H_{\infty }$ optimal control methods require a persistence of excitation (PE) condition or a data storage mechanism to guarantee convergence. However, the PE condition is hard to monitor online, and a data storage mechanism requires storing large amounts of past system data. To address these problems, initial excitation-based reinforcement learning algorithms are presented that learn the optimal control policy under an online-verifiable initial excitation condition. The properties of these algorithms are analyzed, showing that they converge to the optimum under the initial excitation condition. Numerical analysis is provided to demonstrate the correctness of the presented algorithms.
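For background on the problem class the abstract describes: $H_{\infty }$ optimal control of a linear system reduces to solving a game algebraic Riccati equation (GARE) for a zero-sum differential game between the control and the disturbance. The sketch below is a minimal, *model-based* policy-iteration illustration on a scalar system; all numerical values are made-up assumptions, and it does not reproduce the paper's initial-excitation-based, model-free algorithms.

```python
# Model-based policy iteration for a scalar H-infinity (zero-sum game) problem:
#   dx/dt = a*x + b*u + d*w,  cost = integral(q*x^2 + r*u^2 - gamma^2*w^2) dt.
# All numerical values below are illustrative assumptions, not from the paper.

def solve_hinf_scalar(a, b, d, q, r, gamma, iters=50):
    """Iterate on the control gain K and worst-case disturbance gain L.

    Each step solves the scalar Lyapunov equation
        2*(a - b*K + d*L)*P + q + r*K**2 - gamma**2 * L**2 = 0
    for the value P, then updates K = b*P/r and L = d*P/gamma**2
    (a Kleinman-style iteration for the game algebraic Riccati equation).
    """
    K, L = 0.0, 0.0  # start from the zero policies (a itself is assumed stable)
    for _ in range(iters):
        a_cl = a - b * K + d * L  # closed-loop dynamics under current policies
        assert a_cl < 0, "closed loop must remain stable during iteration"
        P = -(q + r * K**2 - gamma**2 * L**2) / (2.0 * a_cl)
        K, L = b * P / r, d * P / gamma**2
    return P

# Example with made-up scalar data: a=-1, b=d=q=r=1, attenuation level gamma=2.
P = solve_hinf_scalar(a=-1.0, b=1.0, d=1.0, q=1.0, r=1.0, gamma=2.0)

# The converged P should satisfy the scalar GARE:
#   (d^2/gamma^2 - b^2/r) * P^2 + 2*a*P + q = 0
residual = (1.0 / 4.0 - 1.0) * P**2 + 2.0 * (-1.0) * P + 1.0
```

The point of the paper, by contrast, is to learn such a solution *without* knowing the dynamics (a, b, d above) and without requiring a PE condition, using only an initially exciting segment of online data.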
Journal introduction:
The scope of the IEEE Transactions on Cybernetics includes computational approaches to the field of cybernetics. Specifically, the transactions welcomes papers on communication and control among machines or between machines, humans, and organizations. The scope includes such areas as computational intelligence, computer vision, neural networks, genetic algorithms, machine learning, fuzzy systems, cognitive systems, decision making, and robotics, to the extent that they contribute to the theme of cybernetics or demonstrate an application of cybernetics principles.