Beyond Local Views: Global State Inference with Diffusion Models for Cooperative Multi-Agent Reinforcement Learning

Authors: Zhiwei Xu, Hangyu Mao, Nianmin Zhang, Xin Xin, Pengjie Ren, Dapeng Li, Bin Zhang, Guoliang Fan, Zhumin Chen, Changwei Wang, Jiangjin Yin
Journal: arXiv - CS - Multiagent Systems
Published: 2024-08-18
DOI: https://doi.org/arxiv-2408.09501
Citations: 0
Abstract
In partially observable multi-agent systems, agents typically have access only to local observations. This severely hinders their ability to make precise decisions, particularly during decentralized execution. To alleviate this problem, and inspired by image outpainting, we propose State Inference with Diffusion Models (SIDIFF), which uses diffusion models to reconstruct the original global state based solely on local observations. SIDIFF consists of a state generator and a state extractor, which allow agents to choose suitable actions by considering both the reconstructed global state and local observations. In addition, SIDIFF can be effortlessly incorporated into current multi-agent reinforcement learning algorithms to improve their performance. Finally, we evaluated SIDIFF on several experimental platforms, including Multi-Agent Battle City (MABC), a novel and flexible multi-agent reinforcement learning environment we developed. SIDIFF achieved desirable results and outperformed other popular algorithms.
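The core idea — treating global-state reconstruction like image outpainting, where a diffusion process fills in the unobserved portion of the state while staying consistent with what an agent locally observes — can be illustrated with a toy sketch. The code below is NOT the paper's SIDIFF architecture: the state generator in SIDIFF is a learned diffusion model, whereas here the denoiser is a hand-coded shrinkage step, and the conditioning is done by simply clamping the observed entries at every reverse step (an inpainting-style heuristic). All function and variable names are hypothetical.

```python
import numpy as np

def reverse_diffusion_sketch(local_obs, obs_mask, steps=50, seed=0):
    """Toy conditional reverse-diffusion sketch.

    Starting from pure noise, iteratively refine an estimate of the full
    global state vector. Entries the agent has actually observed
    (obs_mask == True) are clamped to their observed values at each step,
    so only the unobserved portion is "outpainted".

    This stands in for a learned denoiser purely for illustration.
    """
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(local_obs.shape)  # start from pure noise
    for t in range(steps, 0, -1):
        # hand-coded "denoiser": shrink the estimate toward a zero-mean prior
        x = x * (1.0 - 1.0 / steps)
        # inject a small, decreasing amount of noise as t -> 0
        x = x + (t / steps) * 0.01 * rng.standard_normal(x.shape)
        # condition on local observations: keep the known entries fixed
        x[obs_mask] = local_obs[obs_mask]
    return x

# An 8-dimensional global state of which the agent observes the first 3 entries.
obs = np.zeros(8)
obs[:3] = [1.0, -0.5, 2.0]
mask = np.zeros(8, dtype=bool)
mask[:3] = True

state_estimate = reverse_diffusion_sketch(obs, mask)
```

In SIDIFF itself, the reconstructed global state produced by such a process is then passed through a state extractor, and the agent conditions its action on both the extracted state features and its raw local observation.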