{"title":"Bridging Training and Execution via Dynamic Directed Graph-Based Communication in Cooperative Multi-Agent Systems","authors":"Zhuohui Zhang, Bin He, Bin Cheng, Gang Li","doi":"arxiv-2408.07397","DOIUrl":null,"url":null,"abstract":"Multi-agent systems must learn to communicate and understand interactions\nbetween agents to achieve cooperative goals in partially observed tasks.\nHowever, existing approaches lack a dynamic directed communication mechanism\nand rely on global states, thus diminishing the role of communication in\ncentralized training. Thus, we propose the transformer-based graph coarsening\nnetwork (TGCNet), a novel multi-agent reinforcement learning (MARL) algorithm.\nTGCNet learns the topological structure of a dynamic directed graph to\nrepresent the communication policy and integrates graph coarsening networks to\napproximate the representation of global state during training. It also\nutilizes the transformer decoder for feature extraction during execution.\nExperiments on multiple cooperative MARL benchmarks demonstrate\nstate-of-the-art performance compared to popular MARL algorithms. Further\nablation studies validate the effectiveness of our dynamic directed graph\ncommunication mechanism and graph coarsening networks.","PeriodicalId":501315,"journal":{"name":"arXiv - CS - Multiagent Systems","volume":"15 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-08-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Multiagent Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2408.07397","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Multi-agent systems must learn to communicate and to understand interactions between agents in order to achieve cooperative goals in partially observed tasks. However, existing approaches lack a dynamic directed communication mechanism and rely on global states, which diminishes the role of communication in centralized training. We therefore propose the transformer-based graph coarsening network (TGCNet), a novel multi-agent reinforcement learning (MARL) algorithm. TGCNet learns the topological structure of a dynamic directed graph to represent the communication policy and integrates graph coarsening networks to approximate the global state representation during training; during execution, it uses a transformer decoder for feature extraction. Experiments on multiple cooperative MARL benchmarks demonstrate state-of-the-art performance compared with popular MARL algorithms, and further ablation studies validate the effectiveness of the dynamic directed graph communication mechanism and the graph coarsening networks.
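
The abstract does not give implementation details, so the following is only a rough illustration of what a dynamic directed communication layer could look like in PyTorch. The class name `DirectedCommLayer`, the top-k edge selection, and all dimensions are assumptions for the sketch, not taken from the paper.

```python
# Hypothetical sketch of dynamic directed communication (assumption, not the authors' code).
import torch
import torch.nn as nn

class DirectedCommLayer(nn.Module):
    """Each agent selects the k most relevant other agents to receive messages
    from, forming a dynamic directed graph from local observations."""
    def __init__(self, obs_dim: int, hidden_dim: int, k: int = 2):
        super().__init__()
        self.query = nn.Linear(obs_dim, hidden_dim)
        self.key = nn.Linear(obs_dim, hidden_dim)
        self.msg = nn.Linear(obs_dim, hidden_dim)
        self.k = k

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        # obs: (n_agents, obs_dim) local observations of all agents
        q, k_, m = self.query(obs), self.key(obs), self.msg(obs)
        logits = q @ k_.t() / q.shape[-1] ** 0.5          # (n, n) edge scores
        logits.fill_diagonal_(float("-inf"))              # no self-edges
        topk = logits.topk(self.k, dim=-1).indices        # each agent picks k senders
        adj = torch.zeros_like(logits).scatter_(-1, topk, 1.0)  # directed adjacency
        weights = torch.softmax(logits.masked_fill(adj == 0, float("-inf")), dim=-1)
        return weights @ m                                # aggregated incoming messages

# Example usage: 4 agents with 16-dimensional observations.
layer = DirectedCommLayer(obs_dim=16, hidden_dim=32, k=2)
messages = layer(torch.randn(4, 16))                     # (4, 32)
```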
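Likewise, "graph coarsening networks to approximate the global state" suggests pooling agent node features into a smaller set of supernodes during centralized training. The sketch below uses a DiffPool-style soft assignment as a stand-in; `CoarseningPool` and its details are illustrative assumptions rather than the paper's exact network.

```python
# Hypothetical sketch of a graph-coarsening pooling step (assumption, not the paper's exact network).
import torch
import torch.nn as nn

class CoarseningPool(nn.Module):
    """Softly assigns agent nodes to fewer supernodes and pools their features,
    yielding a compact embedding that can approximate the global state."""
    def __init__(self, feat_dim: int, n_clusters: int):
        super().__init__()
        self.assign = nn.Linear(feat_dim, n_clusters)   # soft cluster assignment
        self.embed = nn.Linear(feat_dim, feat_dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor):
        # x: (n_agents, feat_dim) node features, adj: (n_agents, n_agents) directed adjacency
        s = torch.softmax(self.assign(x), dim=-1)       # (n, c) assignment matrix
        x_coarse = s.t() @ self.embed(x)                # (c, feat_dim) supernode features
        adj_coarse = s.t() @ adj @ s                    # (c, c) coarsened adjacency
        return x_coarse, adj_coarse
```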