Optimal Tracking Control of Second-Order Multiagent Systems With Input Delay via Data-Driven Forward Reward Q-Learning Framework

IF 8.7 · JCR Q1 (Automation & Control Systems) · CAS Region 1 (Computer Science)
IEEE Transactions on Systems Man Cybernetics-Systems · Pub Date: 2024-12-23 · DOI: 10.1109/TSMC.2024.3513561
Kai Rao;Huaicheng Yan;Qiwei Liu;Qingmei Dang;Kaibo Shi
Volume 55, Issue 3, pp. 1858-1869 · https://ieeexplore.ieee.org/document/10812351/
Citation count: 0

Abstract

In this article, an optimal tracking control algorithm is derived for second-order discrete-time multiagent systems (MASs) with unknown system dynamics and input delay. First, the optimal tracking problem of MASs with input delay is formulated in terms of the tracking error and a local performance index function. By designing a new variable, the original model is converted into a delay-free model while guaranteeing the equivalence of the performance index and control law of each agent. Subsequently, the transformed model and reinforcement learning (RL) theory are integrated to obtain a novel data-driven distributed learning framework. This framework enables online learning of the optimal control law and ensures tracking consensus of all followers' position and velocity states. Compared to the traditional actor-critic framework, an additional neural network (NN) is utilized to approximate the forward reward information (FRI), improving the information learning capability of the MASs. Furthermore, the convergence analysis of the system states and the three NN structures is conducted via Lyapunov theory. Finally, the proposed framework is verified, through comparative numerical simulations, to converge better and require fewer iteration steps than the classical actor-critic framework.
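The delay-removal step described in the abstract (introducing a new variable to obtain a delay-free model with an equivalent control law) is commonly realized by state augmentation: stacking the current state with the pending delayed inputs. The paper's exact construction is not given here, so the sketch below shows only the standard augmentation for a linear discrete-time system x_{k+1} = A x_k + B u_{k-d}; the function name and the linear form are illustrative assumptions, not the authors' method.

```python
import numpy as np

def augment_delay_system(A, B, d):
    """Lift x_{k+1} = A x_k + B u_{k-d} into a delay-free system
    z_{k+1} = Aa z_k + Ba u_k with z_k = [x_k; u_{k-d}; ...; u_{k-1}].
    (Illustrative sketch; not the paper's specific transformation.)"""
    n, m = B.shape
    N = n + d * m
    Aa = np.zeros((N, N))
    Ba = np.zeros((N, m))
    Aa[:n, :n] = A            # original state dynamics
    Aa[:n, n:n + m] = B       # oldest stored input enters the plant now
    # shift the stored input buffer forward by one step
    for i in range(d - 1):
        Aa[n + i * m : n + (i + 1) * m,
           n + (i + 1) * m : n + (i + 2) * m] = np.eye(m)
    Ba[n + (d - 1) * m :, :] = np.eye(m)  # newest input appended at the tail
    return Aa, Ba
```

For a second-order agent (a double integrator, the setting of the paper), A and B take the familiar position/velocity form, and the augmented pair (Aa, Ba) can then be handed to any delay-free optimal tracking or Q-learning routine.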
Source Journal
IEEE Transactions on Systems Man Cybernetics-Systems
Categories: AUTOMATION & CONTROL SYSTEMS; COMPUTER SCIENCE, CYBERNETICS
CiteScore: 18.50
Self-citation rate: 11.50%
Articles per year: 812
Review time: 6 months
Journal Introduction: The IEEE Transactions on Systems, Man, and Cybernetics: Systems encompasses the fields of systems engineering, covering issue formulation, analysis, and modeling throughout the systems engineering lifecycle phases. It addresses decision-making, issue interpretation, systems management, processes, and various methods such as optimization, modeling, and simulation in the development and deployment of large systems.