{"title":"Reinforcement Learning-Based H<sub>∞</sub> Control of 2-D Markov Jump Roesser Systems With Optimal Disturbance Attenuation.","authors":"Jiacheng Wu, Bosen Lian, Hongye Su, Yang Zhu","doi":"10.1109/TNNLS.2024.3487760","DOIUrl":null,"url":null,"abstract":"<p><p>This article investigates model-free reinforcement learning (RL)-based H<sub>∞</sub> control problem for discrete-time 2-D Markov jump Roesser systems ( 2 -D MJRSs) with optimal disturbance attenuation level. This is compared to existing studies on H<sub>∞</sub> control of 2-D MJRSs with optimal disturbance attenuation levels that are off-line and use full system dynamics. We design a comprehensive model-free RL algorithm to solve optimal H<sub>∞</sub> control policy, optimize disturbance attenuation level, and search for the initial stabilizing control policy, via online horizontal and vertical data along 2-D MJRSs trajectories. The optimal disturbance attenuation level is obtained by solving a set of linear matrix inequalities based on online measurement data. The initial stabilizing control policy is obtained via a data-driven parallel value iteration (VI) algorithm. Besides, we further certify the performance including the convergence of the RL algorithm and the asymptotic mean-square stability of the closed-loop systems. 
Finally, simulation results and comparisons demonstrate the effectiveness of the proposed algorithms.</p>","PeriodicalId":13303,"journal":{"name":"IEEE transactions on neural networks and learning systems","volume":"PP ","pages":""},"PeriodicalIF":10.2000,"publicationDate":"2024-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on neural networks and learning systems","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1109/TNNLS.2024.3487760","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0
Abstract
This article investigates the model-free reinforcement learning (RL)-based H∞ control problem for discrete-time 2-D Markov jump Roesser systems (2-D MJRSs) with an optimal disturbance attenuation level. In contrast, existing studies on H∞ control of 2-D MJRSs with optimal disturbance attenuation levels are offline and rely on full knowledge of the system dynamics. We design a comprehensive model-free RL algorithm that solves for the optimal H∞ control policy, optimizes the disturbance attenuation level, and searches for an initial stabilizing control policy, using online horizontal and vertical data collected along 2-D MJRS trajectories. The optimal disturbance attenuation level is obtained by solving a set of linear matrix inequalities constructed from online measurement data. The initial stabilizing control policy is obtained via a data-driven parallel value iteration (VI) algorithm. Furthermore, we certify performance guarantees, including the convergence of the RL algorithm and the asymptotic mean-square stability of the closed-loop systems. Finally, simulation results and comparisons demonstrate the effectiveness of the proposed algorithms.
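To make the value-iteration idea behind such H∞ designs concrete, the sketch below runs a game-theoretic Riccati value iteration for a standard 1-D discrete-time LTI system. This is only an illustrative analogue of the paper's data-driven parallel VI for 2-D Markov jump Roesser systems, not the authors' algorithm: the system matrices, the weights, and the attenuation candidate `gamma` are all hypothetical, and the iteration uses a model rather than online data.

```python
import numpy as np

# Hypothetical 1-D LTI system x_{k+1} = A x_k + B u_k + D w_k, with
# minimizing control u, maximizing disturbance w, and stage cost
# x'Qx + u'Ru - gamma^2 w'w (the zero-sum game underlying H-infinity control).
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.array([[0.0],
              [1.0]])
D = np.array([[0.1],
              [0.1]])
Q = np.eye(2)      # state weight (assumed)
R = np.eye(1)      # control weight (assumed)
gamma = 2.0        # candidate disturbance attenuation level (assumed)

def hinf_value_iteration(iters=500, tol=1e-12):
    """Iterate the zero-sum-game Riccati map P -> Q + A'PA - correction
    until the quadratic value matrix P converges."""
    P = np.zeros_like(Q)
    BD = np.hstack([B, D])
    for _ in range(iters):
        # Saddle-point block: [[R + B'PB, B'PD], [D'PB, D'PD - gamma^2 I]]
        Ruu = R + B.T @ P @ B
        Ruw = B.T @ P @ D
        Rww = D.T @ P @ D - gamma**2 * np.eye(D.shape[1])
        Lam = np.block([[Ruu, Ruw], [Ruw.T, Rww]])
        P_next = Q + A.T @ P @ A - A.T @ P @ BD @ np.linalg.solve(Lam, BD.T @ P @ A)
        if np.max(np.abs(P_next - P)) < tol:
            P = P_next
            break
        P = P_next
    return P

P = hinf_value_iteration()

# Recover the saddle-point policies [u; w] = -Lam^{-1} [B D]' P A x from P.
BD = np.hstack([B, D])
Ruu = R + B.T @ P @ B
Ruw = B.T @ P @ D
Rww = D.T @ P @ D - gamma**2 * np.eye(1)
Lam = np.block([[Ruu, Ruw], [Ruw.T, Rww]])
KL = np.linalg.solve(Lam, BD.T @ P @ A)   # stacked gains for u and w
K = KL[:1, :]                             # control gain: u = -K x
print("spectral radius of A - B K:", max(abs(np.linalg.eigvals(A - B @ K))))
```

In the paper's 2-D data-driven setting, the same fixed-point map is instead evaluated from measured horizontal and vertical trajectory data, in parallel across the Markov modes, which is what removes the need for the model matrices used explicitly here.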
Journal Overview:
The focus of IEEE Transactions on Neural Networks and Learning Systems is to present scholarly articles discussing the theory, design, and applications of neural networks as well as other learning systems. The journal primarily highlights technical and scientific research in this domain.