Policy Iteration Reinforcement Learning Method for Continuous-Time Linear–Quadratic Mean-Field Control Problems

IF 7.0 · JCR Q1 (Automation & Control Systems) · CAS Tier 1 (Computer Science) · IEEE Transactions on Automatic Control · Pub Date: 2024-11-07 · DOI: 10.1109/TAC.2024.3494656
Na Li;Xun Li;Zuo Quan Xu
Volume 70, Issue 4, pp. 2690–2697.
Citations: 0

Abstract

In this article, we employ a policy iteration reinforcement learning (RL) method to study continuous-time linear–quadratic mean-field control problems over an infinite horizon. The drift and diffusion terms in the dynamics involve the states, the controls, and their conditional expectations. We investigate the stabilizability and convergence of the RL algorithm using a Lyapunov recursion. Instead of solving a pair of coupled Riccati equations, the RL technique uses an auxiliary function and the cost functional as the objective functions and updates the policy to compute the optimal control from state trajectories. A numerical example illustrates the established theoretical results.
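To make the policy iteration idea concrete, the sketch below implements the classical Kleinman policy iteration for the standard (non-mean-field) continuous-time LQ problem. This is an illustration only, not the paper's algorithm: the paper's method handles mean-field dynamics with conditional expectations and a pair of coupled Riccati equations, and learns from state trajectories rather than from known system matrices. Here each iteration alternates a policy-evaluation step (a Lyapunov equation, mirroring the Lyapunov recursion mentioned in the abstract) with a policy-improvement step.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, solve_continuous_are

def policy_iteration_lq(A, B, Q, R, K0, n_iter=50, tol=1e-10):
    """Kleinman policy iteration for min ∫(x'Qx + u'Ru)dt s.t. dx/dt = Ax + Bu.

    K0 must be stabilizing, i.e. every eigenvalue of A - B K0 has
    negative real part. Returns the value matrix P and gain K (u = -Kx).
    """
    K = K0
    for _ in range(n_iter):
        A_cl = A - B @ K
        # Policy evaluation: solve A_cl' P + P A_cl = -(Q + K' R K)
        P = solve_continuous_lyapunov(A_cl.T, -(Q + K.T @ R @ K))
        # Policy improvement: K <- R^{-1} B' P
        K_new = np.linalg.solve(R, B.T @ P)
        if np.linalg.norm(K_new - K) < tol:
            K = K_new
            break
        K = K_new
    return P, K

# Usage on a small example system (matrices chosen for illustration).
A = np.array([[0.0, 1.0], [-1.0, 0.5]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])
K0 = np.array([[0.0, 2.0]])  # stabilizing: A - B K0 is Hurwitz
P, K = policy_iteration_lq(A, B, Q, R, K0)
# The iterates converge to the solution of the algebraic Riccati equation,
# which can be checked against scipy's direct ARE solver:
P_are = solve_continuous_are(A, B, Q, R)
```

The iteration converges monotonically to the Riccati solution from any stabilizing initial gain; the mean-field setting of the paper replaces the single Lyapunov/Riccati pair with coupled equations and a data-driven evaluation step.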
Source journal: IEEE Transactions on Automatic Control (Engineering Technology — Engineering: Electrical & Electronic)
CiteScore: 11.30 · Self-citation rate: 5.90% · Articles published: 824 · Review time: 9 months
Journal introduction: In the IEEE Transactions on Automatic Control, the IEEE Control Systems Society publishes high-quality papers on the theory, design, and applications of control engineering. Two types of contributions are regularly considered: 1) Papers: presentation of significant research, development, or application of control concepts. 2) Technical Notes and Correspondence: brief technical notes, comments on published areas or established control topics, and corrections to papers and notes published in the Transactions. In addition, special papers (tutorials, surveys, and perspectives on the theory and applications of control systems topics) are solicited.