Reinforcement Learning for H∞ Optimal Control of Unknown Continuous-Time Linear Systems

IEEE Transactions on Cybernetics · IF 10.5 · JCR Q1 (Automation & Control Systems) · CAS Region 1 (Computer Science) · Pub Date: 2025-02-28 · DOI: 10.1109/TCYB.2025.3541815
Hongyang Li;Qinglai Wei;Xiangmin Tan
Volume 55, Issue 5, pages 2379-2389 · Journal Article · https://ieeexplore.ieee.org/document/10908416/
Citations: 0

Abstract

Designing optimal controllers for practical systems is challenging due to unknown system dynamics and unavoidable external disturbances. In this article, the $H_{\infty}$ optimal control problem is investigated for continuous-time linear systems with unknown dynamics. Existing reinforcement learning-based $H_{\infty}$ optimal control methods require a persistence of excitation (PE) condition or a data storage mechanism to guarantee convergence of the algorithms. However, the PE condition is hard to monitor online, and data storage mechanisms must retain large amounts of past system data. To address these problems, initial excitation-based reinforcement learning algorithms are presented that learn the optimal control policy under an online-verifiable initial excitation condition. The properties of the initial excitation-based reinforcement learning algorithms are analyzed, showing that the presented algorithms converge to the optimum under the initial excitation condition. Numerical analysis is provided that demonstrates the correctness of the presented algorithms.
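The paper's model-free, initial-excitation-based algorithms are not reproduced here, but the object they approximate can be made concrete. For a linear system $\dot{x} = Ax + Bu + Dw$, the $H_{\infty}$ optimal controller is characterized by the game algebraic Riccati equation $A^{\top}P + PA + Q - P(BR^{-1}B^{\top} - \gamma^{-2}DD^{\top})P = 0$. The sketch below solves this equation with a standard Kleinman-style Newton iteration, assuming known dynamics (unlike the paper's unknown-dynamics setting); the system matrices and function name are illustrative, not from the paper.

```python
# Hypothetical baseline: Kleinman-style Newton iteration for the H-infinity
# game algebraic Riccati equation (GARE), assuming KNOWN dynamics.
# The paper's contribution is the model-free version of this problem;
# this sketch only illustrates the equation being solved.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def solve_gare(A, B, D, Q, R, gamma, iters=100, tol=1e-12):
    n = A.shape[0]
    # Indefinite "effective" quadratic term of the GARE.
    S = B @ np.linalg.solve(R, B.T) - (gamma ** -2) * (D @ D.T)
    P = np.zeros((n, n))  # P0 = 0 is admissible when A is Hurwitz
    for _ in range(iters):
        Ak = A - S @ P  # closed-loop matrix under current policies
        # Newton step: solve the Lyapunov equation
        #   Ak' Pn + Pn Ak = -(Q + P S P)
        Pn = solve_continuous_lyapunov(Ak.T, -(Q + P @ S @ P))
        if np.linalg.norm(Pn - P) < tol:
            P = Pn
            break
        P = Pn
    K = np.linalg.solve(R, B.T @ P)   # control gain, u = -K x
    L = (gamma ** -2) * (D.T @ P)     # worst-case disturbance gain, w = L x
    return P, K, L

# Illustrative second-order system (Hurwitz A, so P0 = 0 is stabilizing).
A = np.array([[0.0, 1.0], [-1.0, -2.0]])
B = np.array([[0.0], [1.0]])
D = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])
gamma = 5.0
P, K, L = solve_gare(A, B, D, Q, R, gamma)
```

A model-free scheme such as the paper's learns $P$, $K$, and $L$ from measured trajectories instead of from $(A, B, D)$; the GARE residual is the natural check that any such scheme has converged to the same solution.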
Source Journal
IEEE Transactions on Cybernetics · Computer Science, Artificial Intelligence; Computer Science, Cybernetics
CiteScore: 25.40
Self-citation rate: 11.00%
Articles published per year: 1869
Journal scope: The scope of the IEEE Transactions on Cybernetics includes computational approaches to the field of cybernetics. Specifically, the transactions welcomes papers on communication and control across machines or across machines, humans, and organizations. The scope includes such areas as computational intelligence, computer vision, neural networks, genetic algorithms, machine learning, fuzzy systems, cognitive systems, decision making, and robotics, to the extent that they contribute to the theme of cybernetics or demonstrate an application of cybernetics principles.