Mai-Kao Lu , Ming-Feng Ge , Zhi-Chen Yan , Teng-Fei Ding , Zhi-Wei Liu
{"title":"通过强化学习实现多代理系统协同控制的综合决策-执行框架","authors":"Mai-Kao Lu , Ming-Feng Ge , Zhi-Chen Yan , Teng-Fei Ding , Zhi-Wei Liu","doi":"10.1016/j.sysconle.2024.105949","DOIUrl":null,"url":null,"abstract":"<div><div>Cooperative control is both a crucial and hot research topic for multi-agent systems (MASs). However, most existing cooperative control strategies guarantee tracking stability under various non-ideal conditions, while the path decision capability is often ignored. In this paper, the integrated decision-execution (IDE) framework is newly presented for cooperative control of multi-agent systems (MASs) to accomplish the integrated task of path decision and cooperative execution. This framework includes a decision layer and a control layer. The decision layer generates a continuous trajectory for the virtual leader to reach the target from its initial position in an unknown environment. To achieve the goal of this layer, (1) the Step-based Adaptive Search Q-learning (SASQ-learning) algorithm is proposed based on reinforcement learning to efficiently find the discrete path, (2) an Axis-based Trajectory Fitting (ATF) method is developed to convert the discrete path into a continuous trajectory for mobile agents. In the control layer, this trajectory is used to regulate the following MASs to achieve cooperative tracking control with the presence of input saturation. Simulation experiments are presented to demonstrate the effectiveness of this framework.</div></div>","PeriodicalId":49450,"journal":{"name":"Systems & Control Letters","volume":"193 ","pages":"Article 105949"},"PeriodicalIF":2.1000,"publicationDate":"2024-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"An integrated decision-execution framework of cooperative control for multi-agent systems via reinforcement learning\",\"authors\":\"Mai-Kao Lu , Ming-Feng Ge , Zhi-Chen Yan , Teng-Fei Ding , Zhi-Wei Liu\",\"doi\":\"10.1016/j.sysconle.2024.105949\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Cooperative control is both a crucial and hot research topic for multi-agent systems (MASs). However, most existing cooperative control strategies guarantee tracking stability under various non-ideal conditions, while the path decision capability is often ignored. In this paper, the integrated decision-execution (IDE) framework is newly presented for cooperative control of multi-agent systems (MASs) to accomplish the integrated task of path decision and cooperative execution. This framework includes a decision layer and a control layer. The decision layer generates a continuous trajectory for the virtual leader to reach the target from its initial position in an unknown environment. To achieve the goal of this layer, (1) the Step-based Adaptive Search Q-learning (SASQ-learning) algorithm is proposed based on reinforcement learning to efficiently find the discrete path, (2) an Axis-based Trajectory Fitting (ATF) method is developed to convert the discrete path into a continuous trajectory for mobile agents. In the control layer, this trajectory is used to regulate the following MASs to achieve cooperative tracking control with the presence of input saturation. 
Simulation experiments are presented to demonstrate the effectiveness of this framework.</div></div>\",\"PeriodicalId\":49450,\"journal\":{\"name\":\"Systems & Control Letters\",\"volume\":\"193 \",\"pages\":\"Article 105949\"},\"PeriodicalIF\":2.1000,\"publicationDate\":\"2024-10-23\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Systems & Control Letters\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0167691124002378\",\"RegionNum\":3,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"AUTOMATION & CONTROL SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Systems & Control Letters","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0167691124002378","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"AUTOMATION & CONTROL SYSTEMS","Score":null,"Total":0}
An integrated decision-execution framework of cooperative control for multi-agent systems via reinforcement learning
Cooperative control is a crucial and active research topic for multi-agent systems (MASs). However, most existing cooperative control strategies focus on guaranteeing tracking stability under various non-ideal conditions, while path-decision capability is often ignored. In this paper, an integrated decision-execution (IDE) framework is presented for cooperative control of MASs to accomplish the integrated task of path decision and cooperative execution. The framework consists of a decision layer and a control layer. The decision layer generates a continuous trajectory for the virtual leader to reach the target from its initial position in an unknown environment. To achieve this, (1) a Step-based Adaptive Search Q-learning (SASQ-learning) algorithm is proposed, based on reinforcement learning, to efficiently find a discrete path, and (2) an Axis-based Trajectory Fitting (ATF) method is developed to convert the discrete path into a continuous trajectory for the mobile agents. In the control layer, this trajectory is used to regulate the follower agents so that cooperative tracking control is achieved in the presence of input saturation. Simulation experiments demonstrate the effectiveness of the framework.
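The abstract only outlines the two decision-layer steps, so the sketch below is a minimal illustration of the general idea rather than the paper's method: a standard tabular Q-learning search on an assumed grid map stands in for SASQ-learning, and a least-squares polynomial fit of the resulting waypoints stands in for the ATF conversion to a continuous leader trajectory. The grid map, reward values, hyperparameters, and the `leader_ref` helper are all illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of the two decision-layer steps: (1) tabular Q-learning finds a
# discrete path on an assumed grid with obstacles, and (2) a least-squares polynomial
# fit converts the discrete waypoints into a continuous reference trajectory for the
# virtual leader. All numbers below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

GRID = np.array([            # 0 = free cell, 1 = obstacle (assumed map)
    [0, 0, 0, 0, 0],
    [0, 1, 1, 0, 0],
    [0, 0, 1, 0, 1],
    [1, 0, 0, 0, 0],
    [0, 0, 1, 0, 0],
])
START, GOAL = (0, 0), (4, 4)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]    # up, down, left, right

def step(state, a):
    """Apply an action; moves off the grid or into obstacles keep the agent in place."""
    r, c = state[0] + ACTIONS[a][0], state[1] + ACTIONS[a][1]
    nxt = (r, c) if (0 <= r < GRID.shape[0] and 0 <= c < GRID.shape[1]
                     and GRID[r, c] == 0) else state
    reward = 10.0 if nxt == GOAL else -1.0       # -1 per step favors short paths
    return nxt, reward, nxt == GOAL

Q = np.zeros(GRID.shape + (len(ACTIONS),))
alpha, gamma, eps = 0.5, 0.95, 0.2
for _ in range(2000):                            # standard epsilon-greedy Q-learning
    s = START
    for _ in range(200):
        a = rng.integers(len(ACTIONS)) if rng.random() < eps else int(np.argmax(Q[s]))
        s2, r, done = step(s, a)
        Q[s][a] += alpha * (r + gamma * np.max(Q[s2]) - Q[s][a])
        s = s2
        if done:
            break

# Greedy rollout of the learned policy yields the discrete path.
path, s = [START], START
while s != GOAL and len(path) < 50:
    s, _, _ = step(s, int(np.argmax(Q[s])))
    path.append(s)

# Fit each coordinate against a time parameter to obtain a continuous trajectory,
# mimicking the idea of turning waypoints into a smooth reference for the leader.
t = np.linspace(0.0, 1.0, len(path))
rows, cols = zip(*path)
deg = min(5, len(path) - 1)
px, py = np.polyfit(t, rows, deg), np.polyfit(t, cols, deg)
leader_ref = lambda tau: (np.polyval(px, tau), np.polyval(py, tau))

print("discrete path:", path)
print("leader position at t=0.5:", leader_ref(0.5))
```

In the paper, the resulting continuous trajectory then serves as the reference that the follower agents track under input saturation; the sketch stops at producing the leader reference.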
Journal introduction:
Founded in 1981 by two of the pre-eminent control theorists, Roger Brockett and Jan Willems, Systems & Control Letters is one of the leading journals in the field of control theory. The aim of the journal is to allow dissemination of relatively concise but highly original contributions whose high initial quality enables a relatively rapid review process. All aspects of the fields of systems and control are covered, especially mathematically oriented and theoretical papers that have a clear relevance to engineering, the physical and biological sciences, and even economics. Application-oriented papers with sophisticated and rigorous mathematical elements are also welcome.