Jinna Li;Lin Yuan;Weiran Cheng;Tianyou Chai;Frank L. Lewis
{"title":"通过改进的 Q$ 函数实现异构多代理系统同步的强化学习","authors":"Jinna Li;Lin Yuan;Weiran Cheng;Tianyou Chai;Frank L. Lewis","doi":"10.1109/TCYB.2024.3440333","DOIUrl":null,"url":null,"abstract":"This article dedicates to investigating a methodology for enhancing adaptability to environmental changes of reinforcement learning (RL) techniques with data efficiency, by which a joint control protocol is learned using only data for multiagent systems (MASs). Thus, all followers are able to synchronize themselves with the leader and minimize their individual performance. To this end, an optimal synchronization problem of heterogeneous MASs is first formulated, and then an arbitration RL mechanism is developed for well addressing key challenges faced by the current RL techniques, that is, insufficient data and environmental changes. In the developed mechanism, an improved Q-function with an arbitration factor is designed for accommodating the fact that control protocols tend to be made by historic experiences and instinctive decision-making, such that the degree of control over agents’ behaviors can be adaptively allocated by on-policy and off-policy RL techniques for the optimal multiagent synchronization problem. Finally, an arbitration RL algorithm with critic-only neural networks is proposed, and theoretical analysis and proofs of synchronization and performance optimality are provided. Simulation results verify the effectiveness of the proposed method.","PeriodicalId":13112,"journal":{"name":"IEEE Transactions on Cybernetics","volume":"54 11","pages":"6545-6558"},"PeriodicalIF":9.4000,"publicationDate":"2024-09-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Reinforcement Learning for Synchronization of Heterogeneous Multiagent Systems by Improved Q-Functions\",\"authors\":\"Jinna Li;Lin Yuan;Weiran Cheng;Tianyou Chai;Frank L. Lewis\",\"doi\":\"10.1109/TCYB.2024.3440333\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This article dedicates to investigating a methodology for enhancing adaptability to environmental changes of reinforcement learning (RL) techniques with data efficiency, by which a joint control protocol is learned using only data for multiagent systems (MASs). Thus, all followers are able to synchronize themselves with the leader and minimize their individual performance. To this end, an optimal synchronization problem of heterogeneous MASs is first formulated, and then an arbitration RL mechanism is developed for well addressing key challenges faced by the current RL techniques, that is, insufficient data and environmental changes. In the developed mechanism, an improved Q-function with an arbitration factor is designed for accommodating the fact that control protocols tend to be made by historic experiences and instinctive decision-making, such that the degree of control over agents’ behaviors can be adaptively allocated by on-policy and off-policy RL techniques for the optimal multiagent synchronization problem. Finally, an arbitration RL algorithm with critic-only neural networks is proposed, and theoretical analysis and proofs of synchronization and performance optimality are provided. 
Simulation results verify the effectiveness of the proposed method.\",\"PeriodicalId\":13112,\"journal\":{\"name\":\"IEEE Transactions on Cybernetics\",\"volume\":\"54 11\",\"pages\":\"6545-6558\"},\"PeriodicalIF\":9.4000,\"publicationDate\":\"2024-09-24\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Cybernetics\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10690164/\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"AUTOMATION & CONTROL SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Cybernetics","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10690164/","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"AUTOMATION & CONTROL SYSTEMS","Score":null,"Total":0}
Reinforcement Learning for Synchronization of Heterogeneous Multiagent Systems by Improved Q-Functions
This article is dedicated to investigating a methodology for enhancing the adaptability of reinforcement learning (RL) techniques to environmental changes while preserving data efficiency, by which a joint control protocol for multiagent systems (MASs) is learned using only data. As a result, all followers are able to synchronize with the leader while minimizing their individual performance indices. To this end, an optimal synchronization problem for heterogeneous MASs is first formulated, and an arbitration RL mechanism is then developed to address two key challenges faced by current RL techniques: insufficient data and environmental changes. In the developed mechanism, an improved Q-function with an arbitration factor is designed to accommodate the fact that control protocols tend to be shaped by both historical experience and instinctive decision-making, so that the degree of control over the agents' behaviors can be adaptively allocated between on-policy and off-policy RL techniques for the optimal multiagent synchronization problem. Finally, an arbitration RL algorithm with critic-only neural networks is proposed, and theoretical analysis and proofs of synchronization and performance optimality are provided. Simulation results verify the effectiveness of the proposed method.
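The abstract does not give the closed form of the improved Q-function, so the following Python sketch is only a rough intuition for the arbitration idea: an arbitration factor that blends an off-policy (greedy) temporal-difference target with an on-policy target. The names (`beta`, `arbitrated_update`) and the tabular simplification are illustrative assumptions, not the paper's actual critic-only neural-network formulation for continuous multiagent synchronization.

import numpy as np

def arbitrated_update(Q, s, a, r, s_next, a_next, beta, alpha=0.1, gamma=0.95):
    """Illustrative blend of on-policy and off-policy TD targets.

    beta in [0, 1] weights the off-policy (greedy) target; (1 - beta)
    weights the on-policy target built from the action actually taken next.
    This mirrors, loosely, how an arbitration factor could allocate control
    between off-policy and on-policy RL.
    """
    target_off = r + gamma * np.max(Q[s_next])    # Q-learning-style (off-policy) target
    target_on = r + gamma * Q[s_next, a_next]     # SARSA-style (on-policy) target
    target = beta * target_off + (1.0 - beta) * target_on
    Q[s, a] += alpha * (target - Q[s, a])         # TD step toward the blended target
    return Q

Under this reading, beta could be adapted online, e.g., weighted toward the off-policy term when ample historical data are available and toward the on-policy term as the environment changes; how the paper actually schedules the arbitration factor is not stated in the abstract.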
Journal Introduction:
The scope of the IEEE Transactions on Cybernetics includes computational approaches to the field of cybernetics. Specifically, the transactions welcomes papers on communication and control across machines, or among machines, humans, and organizations. The scope includes such areas as computational intelligence, computer vision, neural networks, genetic algorithms, machine learning, fuzzy systems, cognitive systems, decision making, and robotics, to the extent that they contribute to the theme of cybernetics or demonstrate an application of cybernetics principles.