{"title":"Deep Multitask Multiagent Reinforcement Learning With Knowledge Transfer","authors":"Yuxiang Mai;Yifan Zang;Qiyue Yin;Wancheng Ni;Kaiqi Huang","doi":"10.1109/TG.2023.3316697","DOIUrl":null,"url":null,"abstract":"Despite the potential of multiagent reinforcement learning (MARL) in addressing numerous complex tasks, training a single team of MARL agents to handle multiple diverse team tasks remains a challenge. In this article, we introduce a novel Multitask method based on Knowledge Transfer in cooperative MARL (MKT-MARL). By learning from task-specific teachers, our approach empowers a single team of agents to attain expert-level performance in multiple tasks. MKT-MARL utilizes a knowledge distillation algorithm specifically designed for the multiagent architecture, which rapidly learns a team control policy incorporating common coordinated knowledge from the experience of task-specific teachers. In addition, we enhance this training with teacher annealing, gradually shifting the model's learning from distillation toward environmental rewards. This enhancement helps the multitask model surpass its single-task teachers. We extensively evaluate our algorithm using two commonly-used benchmarks: \n<italic>StarCraft II</i>\n micromanagement and multiagent particle environment. The experimental results demonstrate that our algorithm outperforms both the single-task teachers and a jointly trained team of agents. Extensive ablation experiments illustrate the effectiveness of the supervised knowledge transfer and the teacher annealing strategy.","PeriodicalId":55977,"journal":{"name":"IEEE Transactions on Games","volume":"16 3","pages":"566-576"},"PeriodicalIF":1.7000,"publicationDate":"2023-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Games","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10255234/","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Abstract
Despite the potential of multiagent reinforcement learning (MARL) in addressing numerous complex tasks, training a single team of MARL agents to handle multiple diverse team tasks remains a challenge. In this article, we introduce a novel Multitask method based on Knowledge Transfer in cooperative MARL (MKT-MARL). By learning from task-specific teachers, our approach empowers a single team of agents to attain expert-level performance in multiple tasks. MKT-MARL utilizes a knowledge distillation algorithm specifically designed for the multiagent architecture, which rapidly learns a team control policy incorporating common coordinated knowledge from the experience of task-specific teachers. In addition, we enhance this training with teacher annealing, gradually shifting the model's learning from distillation toward environmental rewards. This enhancement helps the multitask model surpass its single-task teachers. We extensively evaluate our algorithm using two commonly used benchmarks: StarCraft II micromanagement and the multiagent particle environment. The experimental results demonstrate that our algorithm outperforms both the single-task teachers and a jointly trained team of agents. Extensive ablation experiments illustrate the effectiveness of the supervised knowledge transfer and the teacher annealing strategy.
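
The two mechanisms named in the abstract, distilling a multitask student policy from task-specific teachers and annealing the teacher signal toward environmental reward, can be sketched generically. The snippet below is a minimal illustrative sketch, not the authors' implementation: the linear schedule `anneal_weight`, the KL-based distillation term, and all function and parameter names are assumptions introduced here for clarity.

```python
import torch
import torch.nn.functional as F


def anneal_weight(step: int, total_steps: int) -> float:
    """Linear teacher-annealing schedule: 1.0 (pure distillation) -> 0.0 (pure RL).

    The actual schedule used in MKT-MARL is not specified here; linear decay is
    an assumption for illustration.
    """
    return max(0.0, 1.0 - step / total_steps)


def annealed_multitask_loss(student_logits: torch.Tensor,
                            teacher_logits: torch.Tensor,
                            rl_loss: torch.Tensor,
                            step: int,
                            total_steps: int) -> torch.Tensor:
    """Combine a distillation term (student vs. task-specific teacher) with the
    environmental RL loss, weighted by the annealing schedule.

    student_logits, teacher_logits: [batch, n_agents, n_actions] per-agent action logits.
    rl_loss: scalar loss from the underlying MARL objective (e.g., value-based or
             policy-gradient); its exact form is left unspecified.
    """
    w = anneal_weight(step, total_steps)
    # KL(teacher || student), summed over agents and actions, averaged over the batch.
    distill = F.kl_div(
        F.log_softmax(student_logits, dim=-1),
        F.softmax(teacher_logits, dim=-1),
        reduction="batchmean",
    )
    # Early in training the teacher term dominates; later the environmental
    # reward takes over, which is what lets the student surpass its teachers.
    return w * distill + (1.0 - w) * rl_loss
```

In use, the teacher logits would come from the frozen task-specific expert for whichever task generated the current batch, while `rl_loss` is computed from the shared multitask team policy on the same transitions.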