Policy distillation for efficient decentralized execution in multi-agent reinforcement learning

Yuhang Pei, Tao Ren, Yuxiang Zhang, Zhipeng Sun, Matys Champeyrol

Neurocomputing, Volume 626, Article 129617 (2025). DOI: 10.1016/j.neucom.2025.129617
Abstract
Cooperative Multi-Agent Reinforcement Learning (MARL) addresses complex scenarios where multiple agents collaborate to achieve shared objectives. Training these agents in partially observable environments within the Centralized Training with Decentralized Execution (CTDE) framework remains challenging due to limited information access and the need for lightweight agent networks. To overcome these challenges, we introduce the Centralized Training and Policy Distillation for Decentralized Execution (CTPDE) framework. We propose a centralized dual-attention agent network that integrates the global state and local observations to enable lossless value decomposition and prevent homogeneous agent behaviors. In addition, we propose an efficient policy distillation method in which a network that predicts action-value distributions is distilled from the centralized agent network, ensuring efficient decentralized execution. The evaluation of CTPDE in benchmark environments demonstrates that the attention-based network achieves state-of-the-art performance during training. Moreover, the distilled agent network surpasses existing RNN-based methods and, in some cases, matches the capabilities of more complex architectures. These findings underscore the potential of CTPDE for advancing cooperative MARL tasks.
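To make the policy-distillation step described in the abstract concrete, below is a minimal sketch of distilling a lightweight per-agent student network from a centralized teacher's action-value outputs. This is not the authors' implementation: the names (DecentralizedStudent, distillation_loss), the MLP student architecture, the temperature, and the use of a KL-divergence objective are illustrative assumptions, not details taken from the paper.

```python
# Hedged sketch: distill a lightweight decentralized student from a
# centralized teacher's action values. All names and hyperparameters here
# are assumptions for illustration, not the paper's actual method.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DecentralizedStudent(nn.Module):
    """Small per-agent network mapping a local observation to action logits,
    suitable for decentralized execution."""
    def __init__(self, obs_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)  # logits over the agent's discrete actions

def distillation_loss(teacher_q: torch.Tensor,
                      student_logits: torch.Tensor,
                      temperature: float = 1.0) -> torch.Tensor:
    """KL divergence between the (frozen) centralized teacher's softened
    action-value distribution and the student's predicted distribution."""
    teacher_dist = F.softmax(teacher_q / temperature, dim=-1)
    student_log_dist = F.log_softmax(student_logits / temperature, dim=-1)
    return F.kl_div(student_log_dist, teacher_dist, reduction="batchmean")

# Usage: for each agent, the centralized teacher (which sees the global state
# during training) provides target Q-values; the student is fit on local
# observations only, so it can act without global information at execution.
obs_dim, n_actions, batch = 32, 5, 128
student = DecentralizedStudent(obs_dim, n_actions)
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

local_obs = torch.randn(batch, obs_dim)      # agent's local observations
teacher_q = torch.randn(batch, n_actions)    # stand-in for teacher outputs

loss = distillation_loss(teacher_q, student(local_obs), temperature=2.0)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```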
About the journal
Neurocomputing publishes articles describing recent fundamental contributions in the field of neurocomputing. The journal covers neurocomputing theory, practice, and applications.