HARP: Human-Assisted Regrouping with Permutation Invariant Critic for Multi-Agent Reinforcement Learning

Huawen Hu, Enze Shi, Chenxi Yue, Shuocun Yang, Zihao Wu, Yiwei Li, Tianyang Zhong, Tuo Zhang, Tianming Liu, Shu Zhang

arXiv - CS - Multiagent Systems · arXiv:2409.11741 · Published 2024-09-18 · Citations: 0
Abstract
Human-in-the-loop reinforcement learning integrates human expertise to accelerate agent learning and to provide critical guidance and feedback in complex domains. However, many existing approaches focus on single-agent tasks and require continuous human involvement throughout training, which significantly increases the human workload and limits scalability. In this paper, we propose HARP (Human-Assisted Regrouping with Permutation Invariant Critic), a multi-agent reinforcement learning framework designed for group-oriented tasks. HARP combines automatic agent regrouping with strategic human assistance during deployment, enabling non-experts to offer effective guidance with minimal intervention. During training, agents dynamically adjust their groupings to optimize collaborative task completion. When deployed, they actively seek human assistance and use the Permutation Invariant Group Critic to evaluate and refine human-proposed groupings, allowing non-expert users to contribute valuable suggestions. Across multiple collaboration scenarios, our approach leverages limited guidance from non-experts to enhance performance. The project can be found at https://github.com/huawen-hu/HARP.
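The key property behind the Permutation Invariant Group Critic is that a group's value should not depend on the order in which its agents are listed. The abstract does not give HARP's architecture, so the following is only a minimal sketch of that property, assuming a linear per-agent embedding followed by symmetric (mean) pooling; the names `W_embed`, `w_value`, and `group_value` are illustrative, not from the paper.

```python
# Sketch: a permutation-invariant scoring function for a group of agents.
# Each agent is embedded independently, then a symmetric pooling operation
# (mean) collapses the group before a scalar value is computed, so shuffling
# the agents cannot change the result.
import numpy as np

rng = np.random.default_rng(0)
W_embed = rng.normal(size=(4, 8))   # per-agent observation -> embedding
w_value = rng.normal(size=8)        # pooled embedding -> scalar group value

def group_value(agent_obs: np.ndarray) -> float:
    """Score a group given an (n_agents, obs_dim) observation matrix."""
    embeddings = agent_obs @ W_embed    # embed each agent independently
    pooled = embeddings.mean(axis=0)    # symmetric pooling: order-invariant
    return float(pooled @ w_value)

obs = rng.normal(size=(3, 4))           # three agents, 4-dim observations each
v_original = group_value(obs)
v_permuted = group_value(obs[[2, 0, 1]])  # same agents, different order
assert abs(v_original - v_permuted) < 1e-9
```

Any symmetric pooling (sum, mean, max) gives the same invariance; this is what lets such a critic compare a human-proposed regrouping against the current one without being sensitive to how the agents happen to be indexed.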