Reinforcement Learning H∞ Optimal Formation Control for Perturbed Multiagent Systems With Nonlinear Faults

IEEE Transactions on Systems Man Cybernetics-Systems (Q1, Automation & Control Systems; CAS Tier 1, Computer Science; IF 8.7)
Published: 2024-12-24 | Volume 55, Issue 3, pp. 1935-1947 | DOI: 10.1109/TSMC.2024.3516048 | https://ieeexplore.ieee.org/document/10814669/
Yuxia Wu; Hongjing Liang; Shuxing Xuan; Choon Ki Ahn
Citations: 0

Abstract

This article presents an optimal formation control strategy for multiagent systems based on a reinforcement learning (RL) technique, considering prescribed performance and unknown nonlinear faults. To optimize the control performance, an RL strategy is introduced based on the identifier-critic-actor-disturbance structure and the backstepping framework. The identifier, critic, actor, and disturbance neural networks (NNs) are employed to estimate unknown dynamics, assess system performance, execute control actions, and derive the worst-case disturbance strategy, respectively. Under this scheme, the persistent excitation requirement is removed by adopting simplified NN updating laws, which are derived by applying the gradient descent method to designed positive functions rather than to the square of the Bellman residual. To achieve the desired error precision within the prescribed time, a constraining function and an error transformation scheme are employed. In addition, to enhance the system's robustness, a fault observer is utilized to compensate for the impact of the unknown nonlinear faults. The stability of the closed-loop system is guaranteed while the prescribed performance is achieved. Finally, simulation examples validate the effectiveness of the proposed optimal control strategy.
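To illustrate the prescribed-performance mechanism mentioned in the abstract, the sketch below shows one common construction: a monotonically decreasing constraining function and a logarithmic error transformation. This is a minimal sketch under standard assumptions; the article's exact constraining function, transformation, and parameters are not reproduced here, and the names rho0, rho_inf, and kappa are illustrative placeholders only.

```python
import numpy as np

# Minimal sketch of a typical prescribed-performance construction:
# a decreasing constraining function rho(t) and a logarithmic error
# transformation. The paper's exact functions may differ; rho0, rho_inf,
# and kappa are illustrative assumptions, not values from the article.

def constraining_function(t, rho0=1.0, rho_inf=0.05, kappa=2.0):
    """Bound the formation error must satisfy: -rho(t) < e(t) < rho(t)."""
    return (rho0 - rho_inf) * np.exp(-kappa * t) + rho_inf

def transform_error(e, rho):
    """Map the constrained error e in (-rho, rho) to an unconstrained variable;
    keeping the transformed variable bounded enforces the original constraint."""
    xi = np.clip(e / rho, -0.999, 0.999)  # normalized error, kept strictly inside (-1, 1)
    return 0.5 * np.log((1.0 + xi) / (1.0 - xi))

# Example: a formation error of 0.1 at t = 1 s lies inside the funnel
t, e = 1.0, 0.1
rho = constraining_function(t)
print(f"rho({t}) = {rho:.3f}, transformed error = {transform_error(e, rho):.3f}")
```

In schemes of this kind, the controller regulates the transformed variable, while in the article the actor NN generates the control action and the disturbance NN plays the worst-case H∞ opponent; the specific identifier-critic-actor-disturbance updating laws are detailed in the paper itself and are not reproduced here.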
Source Journal

IEEE Transactions on Systems Man Cybernetics-Systems (Automation & Control Systems; Computer Science, Cybernetics)

CiteScore: 18.50
Self-citation rate: 11.50%
Articles per year: 812
Review time: 6 months

Journal description: The IEEE Transactions on Systems, Man, and Cybernetics: Systems encompasses the fields of systems engineering, covering issue formulation, analysis, and modeling throughout the systems engineering lifecycle phases. It addresses decision-making, issue interpretation, systems management, processes, and various methods such as optimization, modeling, and simulation in the development and deployment of large systems.
Latest articles from this journal

Introducing IEEE Collabratec
IEEE Systems, Man, and Cybernetics Society Information
TechRxiv: Share Your Preprint Research With the World!
Reinforcement Learning-Based Optimized Adaptive Secure Control for Constrained Fractional-Order Nonlinear Systems Under FDI Attacks
Learning Multilayer Feature Projection for Homogeneous and Heterogeneous Palmprint Recognition