Optimal Control for Multi-agent Systems Using Off-Policy Reinforcement Learning

Hao Wang, Zhiru Chen, Jun Wang, Lijun Lu, Mingzhe Li
{"title":"Optimal Control for Multi-agent Systems Using Off-Policy Reinforcement Learning","authors":"Hao Wang, Zhiru Chen, Jun Wang, Lijun Lu, Mingzhe Li","doi":"10.1109/ICCR55715.2022.10053883","DOIUrl":null,"url":null,"abstract":"To achieve the consensus for discrete-time multi-agent systems, an optimal control policy is designed based on off-policy reinforcement learning. By utilizing centralized learning and decentralized execution, we first define a centralized and shared value function. Then, a value iteration adaptive dynamic programming method is proposed to approach the solution of the Bellman optimality equation with convergence analysis. Furthermore, the actor-critic structure is given for the implementation purpose, where one single-critic network is given to approach the optimal centralized value function, and multi-actor networks are decentralized based on the local observation from the neighbors to obtain the optimal policy for each agent. Finally, the proposed algorithm is verified in a leader-follower consensus case.","PeriodicalId":441511,"journal":{"name":"2022 4th International Conference on Control and Robotics (ICCR)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 4th International Conference on Control and Robotics (ICCR)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICCR55715.2022.10053883","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

To achieve consensus for discrete-time multi-agent systems, an optimal control policy is designed based on off-policy reinforcement learning. Adopting centralized learning with decentralized execution, we first define a centralized, shared value function. Then, a value iteration adaptive dynamic programming method is proposed to approximate the solution of the Bellman optimality equation, together with a convergence analysis. Furthermore, an actor-critic structure is given for implementation, in which a single critic network approximates the optimal centralized value function, while multiple actor networks, decentralized on the basis of local observations from neighboring agents, yield the optimal policy for each agent. Finally, the proposed algorithm is verified in a leader-follower consensus case.
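To make the value iteration step concrete, below is a minimal, self-contained sketch of a generic value-iteration backup for the Bellman optimality equation on a finite-state, finite-action MDP. It is an illustration only, not the paper's method: the paper uses neural critic/actor approximators for a multi-agent consensus problem, whereas the states, actions, costs, and transition model here are hypothetical placeholders.

```python
# Illustrative sketch: generic value iteration for a finite MDP, standing in
# for the ADP-style solution of the Bellman optimality equation. All problem
# data below (P, r, sizes, discount factor) are assumed placeholders.
import numpy as np

n_states, n_actions = 6, 3
rng = np.random.default_rng(0)

# Hypothetical transition probabilities P[s, a, s'] and stage costs r[s, a].
P = rng.random((n_states, n_actions, n_states))
P /= P.sum(axis=2, keepdims=True)
r = rng.random((n_states, n_actions))
gamma = 0.95  # discount factor

# Value iteration: V_{k+1}(s) = min_a [ r(s, a) + gamma * sum_s' P(s, a, s') V_k(s') ].
V = np.zeros(n_states)
for _ in range(1000):
    Q = r + gamma * (P @ V)        # Q[s, a] under the current value estimate
    V_new = Q.min(axis=1)          # greedy (cost-minimizing) backup
    if np.max(np.abs(V_new - V)) < 1e-8:
        V = V_new
        break
    V = V_new

# Execution: act greedily with respect to the converged (shared) value
# function; in the paper's setting each agent would do this from its own
# local neighborhood observation.
policy = Q.argmin(axis=1)
print("converged value function:", np.round(V, 3))
print("greedy policy:", policy)
```

In the actor-critic implementation described in the abstract, the tabular backup above is replaced by a single critic network that approximates the centralized value function and per-agent actor networks that are trained from neighbors' local observations.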