Title: Policy Iteration Reinforcement Learning Method for Continuous-Time Linear–Quadratic Mean-Field Control Problems
Authors: Na Li; Xun Li; Zuo Quan Xu
Journal: IEEE Transactions on Automatic Control, vol. 70, no. 4, pp. 2690–2697 (JCR Q1, Automation & Control Systems; Impact Factor 7.0)
DOI: 10.1109/TAC.2024.3494656
Publication date: 2024-11-07
URL: https://ieeexplore.ieee.org/document/10747285/
Citations: 0
Abstract
In this article, we employ a policy iteration reinforcement learning (RL) method to study continuous-time linear–quadratic mean-field control problems over an infinite horizon. The drift and diffusion terms in the dynamics involve the states, the controls, and their conditional expectations. We investigate the stabilizability and convergence of the RL algorithm using a Lyapunov recursion. Instead of solving a pair of coupled Riccati equations, the RL technique takes an auxiliary function and the cost functional as the objective functions and updates the policy to compute the optimal control via state trajectories. A numerical example sheds light on the established theoretical results.
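The abstract's core idea, alternating policy evaluation (a Lyapunov recursion) with policy improvement instead of solving Riccati equations directly, can be illustrated on a simplified problem. The sketch below is a minimal Kleinman-style policy iteration for a standard deterministic LQ regulator without the mean-field (conditional-expectation) terms and with a known model, so it is a stand-in for, not a reproduction of, the paper's data-driven algorithm; all matrices and the initial stabilizing gain `K0` are illustrative.

```python
import numpy as np

def solve_lyapunov(A_cl, M):
    """Solve A_cl.T @ P + P @ A_cl + M = 0 by Kronecker vectorization
    (column-major vec identity: vec(AXB) = kron(B.T, A) vec(X))."""
    n = A_cl.shape[0]
    I = np.eye(n)
    L = np.kron(I, A_cl.T) + np.kron(A_cl.T, I)
    P = np.linalg.solve(L, -M.reshape(-1, order="F")).reshape(n, n, order="F")
    return (P + P.T) / 2  # symmetrize against round-off

def policy_iteration(A, B, Q, R, K0, iters=50):
    """Kleinman policy iteration for dx = (Ax + Bu) dt with cost
    integral of x'Qx + u'Ru; K0 must stabilize A - B @ K0."""
    K = K0
    for _ in range(iters):
        A_cl = A - B @ K
        # policy evaluation: Lyapunov equation for the current gain
        P = solve_lyapunov(A_cl, Q + K.T @ R @ K)
        # policy improvement: greedy gain from the evaluated P
        K = np.linalg.solve(R, B.T @ P)
    return P, K

# Double-integrator example with a stabilizing initial gain
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])
K0 = np.array([[1.0, 1.0]])
P, K = policy_iteration(A, B, Q, R, K0)
```

For this example the iterates converge to the algebraic Riccati solution P = [[sqrt(3), 1], [1, sqrt(3)]] with optimal gain K = [1, sqrt(3)]. The paper's method replaces the model-based evaluation step with estimates built from observed state trajectories and handles the coupled mean-field structure, which this sketch omits.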
Journal Introduction:
In the IEEE Transactions on Automatic Control, the IEEE Control Systems Society publishes high-quality papers on the theory, design, and applications of control engineering. Two types of contributions are regularly considered:
1) Papers: Presentation of significant research, development, or application of control concepts.
2) Technical Notes and Correspondence: Brief technical notes, comments on published areas or established control topics, corrections to papers and notes published in the Transactions.
In addition, special papers (tutorials, surveys, and perspectives on the theory and applications of control systems topics) are solicited.