Putting Data at the Centre of Offline Multi-Agent Reinforcement Learning
Claude Formanek, Louise Beyers, Callum Rhys Tilbury, Jonathan P. Shock, Arnu Pretorius
arXiv - CS - Multiagent Systems, 2024-09-18, doi: arxiv-2409.12001
Abstract
Offline multi-agent reinforcement learning (MARL) is an exciting direction of research that uses static datasets to find optimal control policies for multi-agent systems. Though the field is by definition data-driven, efforts have thus far neglected data in their drive to achieve state-of-the-art results. We first substantiate this claim by surveying the literature, showing how the majority of works generate their own datasets without consistent methodology and provide sparse information about the characteristics of these datasets. We then show why neglecting the nature of the data is problematic, through salient examples of how tightly algorithmic performance is coupled to the dataset used, necessitating a common foundation for experiments in the field. In response, we take a big step towards improving data usage and data awareness in offline MARL, with three key contributions: (1) a clear guideline for generating novel datasets; (2) a standardisation of over 80 existing datasets, hosted in a publicly available repository, using a consistent storage format and easy-to-use API; and (3) a suite of analysis tools that allow us to understand these datasets better, aiding further development.
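
To make contribution (2) concrete, the sketch below illustrates what a consistent storage format and loading API for offline MARL datasets could look like. It is a minimal illustration only: the module, class, and function names (OfflineMARLDataset, load_dataset, summarise) are assumptions for this example and are not the paper's actual interface, and the loader builds a small random placeholder instead of downloading real data.

```python
# Hypothetical sketch of a consistent dataset format and loading API for
# offline MARL. All names here are illustrative assumptions, not the
# paper's actual interface.
from dataclasses import dataclass
from typing import Dict

import numpy as np


@dataclass
class OfflineMARLDataset:
    """A standardised container for one offline MARL dataset."""
    env_name: str                       # e.g. an environment family name
    scenario: str                       # e.g. a specific map or task
    quality: str                        # e.g. "Good", "Medium", "Poor"
    transitions: Dict[str, np.ndarray]  # observations, actions, rewards, ...

    def summarise(self) -> Dict[str, float]:
        """Basic descriptive statistics of the kind an analysis suite might report."""
        returns = self.transitions["episode_returns"]
        return {
            "num_transitions": float(self.transitions["rewards"].shape[0]),
            "mean_episode_return": float(returns.mean()),
            "std_episode_return": float(returns.std()),
        }


def load_dataset(env_name: str, scenario: str, quality: str) -> OfflineMARLDataset:
    """Return a dataset in the standard container format.

    A real implementation would fetch the stored transitions from a shared
    repository; here we generate a small random placeholder so the example
    runs end-to-end.
    """
    rng = np.random.default_rng(0)
    transitions = {
        "observations": rng.normal(size=(1000, 3, 32)),  # (steps, agents, obs_dim)
        "actions": rng.integers(0, 5, size=(1000, 3)),
        "rewards": rng.normal(size=(1000,)),
        "episode_returns": rng.normal(loc=10.0, size=(50,)),
    }
    return OfflineMARLDataset(env_name, scenario, quality, transitions)


if __name__ == "__main__":
    dataset = load_dataset("example_env", "example_scenario", quality="Good")
    print(dataset.summarise())
```

The point of such a container is that every dataset, regardless of who generated it, exposes the same fields and the same summary statistics, which is what makes cross-paper comparisons of algorithmic performance meaningful.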