{"title":"通过事件触发状态反馈实现非线性异构 MAS 的基于 RL 的自适应最优两方共识控制","authors":"Yuhao Zhou;Biao Luo;Xin Wang;Xiaodong Xu;Lin Xiao","doi":"10.1109/TCSI.2024.3426982","DOIUrl":null,"url":null,"abstract":"This article investigates a leader-following bipartite consensus issue for uncertain nonlinear heterogeneous multiagent systems (MASs). Initially, within the framework of optimal control theory, we employ the reinforcement learning (RL) algorithm to derive an approximate solution to the Hamilton-Jacobi-Bellman equation (HJBE). Specifically, the neural networks (NNs) are utilized to construct the Actor-Critic structure with the aim of implementing control behavior and evaluating system performance, respectively. An additional network is employed to address nonlinear uncertainties existing in the system. Furthermore, we design a static threshold event-triggered mechanism (ETM) to achieve the event-triggered state feedback-based control strategy. By utilizing this event-triggered state information, we reconstruct the approximate optimal controller and update laws of neural network weights, effectively reducing the communication burden while ensuring that all signals of the MASs remain bounded. Finally, two simulation examples are carried out to demonstrate the feasibility of the proposed method.","PeriodicalId":13039,"journal":{"name":"IEEE Transactions on Circuits and Systems I: Regular Papers","volume":"71 9","pages":"4261-4273"},"PeriodicalIF":5.2000,"publicationDate":"2024-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"RL-Based Adaptive Optimal Bipartite Consensus Control for Nonlinear Heterogeneous MASs via Event-Triggered State Feedback\",\"authors\":\"Yuhao Zhou;Biao Luo;Xin Wang;Xiaodong Xu;Lin Xiao\",\"doi\":\"10.1109/TCSI.2024.3426982\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This article investigates a leader-following bipartite consensus issue for uncertain nonlinear heterogeneous multiagent systems (MASs). Initially, within the framework of optimal control theory, we employ the reinforcement learning (RL) algorithm to derive an approximate solution to the Hamilton-Jacobi-Bellman equation (HJBE). Specifically, the neural networks (NNs) are utilized to construct the Actor-Critic structure with the aim of implementing control behavior and evaluating system performance, respectively. An additional network is employed to address nonlinear uncertainties existing in the system. Furthermore, we design a static threshold event-triggered mechanism (ETM) to achieve the event-triggered state feedback-based control strategy. By utilizing this event-triggered state information, we reconstruct the approximate optimal controller and update laws of neural network weights, effectively reducing the communication burden while ensuring that all signals of the MASs remain bounded. 
Finally, two simulation examples are carried out to demonstrate the feasibility of the proposed method.\",\"PeriodicalId\":13039,\"journal\":{\"name\":\"IEEE Transactions on Circuits and Systems I: Regular Papers\",\"volume\":\"71 9\",\"pages\":\"4261-4273\"},\"PeriodicalIF\":5.2000,\"publicationDate\":\"2024-08-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Circuits and Systems I: Regular Papers\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10633792/\",\"RegionNum\":1,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"ENGINEERING, ELECTRICAL & ELECTRONIC\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Circuits and Systems I: Regular Papers","FirstCategoryId":"5","ListUrlMain":"https://ieeexplore.ieee.org/document/10633792/","RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0
Abstract
This article investigates a leader-following bipartite consensus problem for uncertain nonlinear heterogeneous multiagent systems (MASs). First, within the framework of optimal control theory, we employ a reinforcement learning (RL) algorithm to derive an approximate solution to the Hamilton-Jacobi-Bellman equation (HJBE). Specifically, neural networks (NNs) are used to construct an actor-critic structure, in which the actor implements the control behavior and the critic evaluates the system performance. An additional NN compensates for the nonlinear uncertainties in the system. Furthermore, we design a static-threshold event-triggered mechanism (ETM) to realize the event-triggered state-feedback control strategy. Using the event-triggered state information, we reconstruct the approximate optimal controller and the NN weight-update laws, which effectively reduces the communication burden while ensuring that all signals of the MASs remain bounded. Finally, two simulation examples demonstrate the feasibility of the proposed method.
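To make the event-triggered state-feedback idea concrete, the following Python sketch simulates a single agent under a generic static-threshold triggering rule: the state is transmitted, and the feedback control refreshed, only when the measurement error between the current state and the last transmitted state exceeds a fixed threshold. The dynamics, gain `K`, threshold, and step size are hypothetical placeholders chosen for illustration; this is not the paper's RL-based optimal controller or its NN weight-update laws.

```python
# Minimal sketch of a static-threshold event-triggered state-feedback loop.
# All quantities below (dynamics, gain K, threshold, step size) are
# hypothetical placeholders, not the controller derived in the paper.
import numpy as np

def dynamics(x, u):
    """Toy nonlinear second-order agent: x1' = x2, x2' = -x1 - 0.5*sin(x1) + u."""
    return np.array([x[1], -x[0] - 0.5 * np.sin(x[0]) + u])

K = np.array([1.0, 1.5])   # placeholder stabilizing state-feedback gain
threshold = 0.05           # static triggering threshold on the measurement error
dt, steps = 0.01, 500

x = np.array([1.0, -0.5])  # current state
x_k = x.copy()             # last transmitted (event-triggered) state sample
u = -K @ x_k               # control held constant between events (zero-order hold)
events = 0

for _ in range(steps):
    # Static-threshold ETM: transmit the state and refresh the controller only
    # when the error between the current and last transmitted state is large.
    if np.linalg.norm(x - x_k) >= threshold:
        x_k = x.copy()
        u = -K @ x_k       # in the paper, this refresh would also drive the
        events += 1        # actor/critic NN weight updates
    x = x + dt * dynamics(x, u)  # forward-Euler integration of the agent

print(f"final state: {x}, transmissions: {events} of {steps} steps")
```

Comparing `events` with the total number of simulation steps gives a rough sense of how such a static-threshold trigger reduces the communication load relative to transmitting the state at every sampling instant.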
Journal Introduction:
TCAS I publishes regular papers in the field specified by the theory, analysis, design, and practical implementations of circuits, and the application of circuit techniques to systems and to signal processing. Included is the whole spectrum from basic scientific theory to industrial applications. The field of interest covered includes:
- Circuits: Analog, Digital and Mixed Signal Circuits and Systems
- Nonlinear Circuits and Systems, Integrated Sensors, MEMS and Systems on Chip, Nanoscale Circuits and Systems, Optoelectronic Circuits and Systems
- Power Electronics and Systems
- Software for Analog-and-Logic Circuits and Systems
- Control aspects of Circuits and Systems.