Local Coordination in Multi-Agent Reinforcement Learning

Fanchao Xu, Tomoyuki Kaneko
{"title":"Local Coordination in Multi-Agent Reinforcement Learning","authors":"Fanchao Xu, Tomoyuki Kaneko","doi":"10.1109/taai54685.2021.00036","DOIUrl":null,"url":null,"abstract":"This paper studies cooperative multi-agent reinforcement learning problems where agents pursue a common goal through their cooperation. Because each agent needs to act individually on the basis on its local observation, the difficulty of learning depends on to what extent information can be exchanged among agents. We extend value-decomposition networks (VDN), a framework requiring the least communication, by allowing information exchange within a local group and present residual group VDN (RGV). We empirically show that the performance of RGV is better than VDN and other state-of-the-art methods in the predator-prey game. Also, on three tasks in the StarCraft Multi-Agent Challenge, RGV showed comparable performance with more sophisticated methods utilizing more information or communication. Therefore, our RGV is an alternative method worth further research.","PeriodicalId":343821,"journal":{"name":"2021 International Conference on Technologies and Applications of Artificial Intelligence (TAAI)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 International Conference on Technologies and Applications of Artificial Intelligence (TAAI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/taai54685.2021.00036","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

This paper studies cooperative multi-agent reinforcement learning problems in which agents pursue a common goal through cooperation. Because each agent must act individually on the basis of its local observation, the difficulty of learning depends on the extent to which information can be exchanged among agents. We extend value-decomposition networks (VDN), a framework requiring the least communication, by allowing information exchange within a local group, and present residual group VDN (RGV). We empirically show that RGV outperforms VDN and other state-of-the-art methods in the predator-prey game. On three tasks in the StarCraft Multi-Agent Challenge, RGV also achieved performance comparable to that of more sophisticated methods that utilize more information or communication. Therefore, RGV is an alternative method worth further research.
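The abstract describes the architecture only at a high level: VDN decomposes the joint action-value additively over agents, and RGV adds information exchange within a local group. The sketch below is a minimal, hedged illustration of that idea in PyTorch: plain VDN computes Q_tot as the sum of per-agent utilities from local observations, and a hypothetical residual term per group is added on top. All names (AgentQNet, ResidualGroupVDN, group_size) and the fixed, contiguous grouping scheme are assumptions for illustration, not the authors' implementation.

```python
# Sketch of VDN-style value decomposition with a hypothetical residual
# group correction in the spirit of RGV. Illustrative only; the paper's
# exact architecture may differ.
import torch
import torch.nn as nn


class AgentQNet(nn.Module):
    """Per-agent utility network: local observation -> Q-values per action."""

    def __init__(self, obs_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)


class ResidualGroupVDN(nn.Module):
    """VDN sums per-agent Q-values: Q_tot = sum_i Q_i(o_i, a_i).
    Here we additionally feed each fixed local group's concatenated
    observations to a small network and add its scalar output as a
    residual correction (an assumed reading of 'residual group VDN')."""

    def __init__(self, n_agents: int, obs_dim: int, n_actions: int,
                 group_size: int = 2, hidden: int = 64):
        super().__init__()
        assert n_agents % group_size == 0  # simplifying assumption
        self.agents = nn.ModuleList(
            [AgentQNet(obs_dim, n_actions) for _ in range(n_agents)])
        self.group_size = group_size
        self.group_nets = nn.ModuleList([
            nn.Sequential(nn.Linear(obs_dim * group_size, hidden), nn.ReLU(),
                          nn.Linear(hidden, 1))
            for _ in range(n_agents // group_size)])

    def forward(self, obs: torch.Tensor, actions: torch.Tensor) -> torch.Tensor:
        # obs: (batch, n_agents, obs_dim); actions: (batch, n_agents), long
        q_all = torch.stack(
            [net(obs[:, i]) for i, net in enumerate(self.agents)], dim=1)
        # Select the Q-value of each agent's chosen action, then sum (VDN).
        chosen = q_all.gather(2, actions.unsqueeze(-1)).squeeze(-1)
        q_tot = chosen.sum(dim=1)
        # Residual corrections from observations shared within each group.
        for g, net in enumerate(self.group_nets):
            lo, hi = g * self.group_size, (g + 1) * self.group_size
            q_tot = q_tot + net(obs[:, lo:hi].flatten(1)).squeeze(-1)
        return q_tot


if __name__ == "__main__":
    model = ResidualGroupVDN(n_agents=4, obs_dim=8, n_actions=5)
    obs = torch.randn(32, 4, 8)
    actions = torch.randint(0, 5, (32, 4))
    print(model(obs, actions).shape)  # torch.Size([32])
```

Note the design point the abstract emphasizes: the per-agent networks still act on local observations only, so execution remains decentralized; the group term only mixes information within a local group rather than across all agents, keeping communication requirements close to VDN's minimum.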