{"title":"Multi-agent reinforcement learning system framework based on topological networks in Fourier space","authors":"Licheng Sun, Ao Ding, Hongbin Ma","doi":"10.1016/j.asoc.2025.112986","DOIUrl":null,"url":null,"abstract":"<div><div>Currently, multi-agent reinforcement learning (MARL) has been applied to various domains such as communications, network management, power systems, and autonomous driving, showcasing broad application scenarios and significant research potential. However, in complex decision-making environments, agents that rely solely on temporal value functions often struggle to capture and extract hidden features and dependencies within long sequences in multi-agent settings. Each agent’s decisions are influenced by a sequence of prior states and actions, leading to complex spatiotemporal dependencies that are challenging to analyze directly in the time domain. Addressing these challenges requires a paradigm shift to analyze such dependencies from a novel perspective. To this end, we propose a Multi-Agent Reinforcement Learning system framework based on Fourier Topological Space from the foundational level. This method involves transforming each agent’s value function into the frequency domain for analysis. Additionally, we design a lightweight weight calculation method based on historical topological relationships in the Fourier topological space. This addresses issues of instability and poor reproducibility in attention weights, along with various other interpretability challenges. The effectiveness of this method is validated through experiments in complex environments such as the StarCraft Multi-Agent Challenge (SMAC) and Google Football. Furthermore, in the Non-monotonic Matrix Game, our method successfully overcame the limitations of non-monotonicity, further proving its wide applicability and superiority. On the application level, the proposed algorithm is also applicable to various multi-agent system domains, such as robotics and factory robotic arm control. The algorithm can control each joint in a coordinated manner to accomplish tasks such as enabling a robot to stand upright or controlling the movements of robotic arms.</div></div>","PeriodicalId":50737,"journal":{"name":"Applied Soft Computing","volume":"174 ","pages":"Article 112986"},"PeriodicalIF":7.2000,"publicationDate":"2025-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Applied Soft Computing","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1568494625002972","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0
Abstract
Multi-agent reinforcement learning (MARL) has been applied to domains such as communications, network management, power systems, and autonomous driving, demonstrating broad application scenarios and significant research potential. However, in complex decision-making environments, agents that rely solely on temporal value functions often struggle to capture the hidden features and dependencies within long sequences in multi-agent settings. Each agent’s decisions are influenced by a sequence of prior states and actions, producing complex spatiotemporal dependencies that are difficult to analyze directly in the time domain. Addressing these challenges requires a paradigm shift that examines such dependencies from a new perspective. To this end, we propose a multi-agent reinforcement learning system framework built, at the foundational level, on Fourier topological space. The method transforms each agent’s value function into the frequency domain for analysis. In addition, we design a lightweight weighting scheme based on historical topological relationships in the Fourier topological space, which addresses the instability and poor reproducibility of attention weights as well as related interpretability challenges. The effectiveness of the method is validated through experiments in complex environments such as the StarCraft Multi-Agent Challenge (SMAC) and Google Football. Furthermore, in the Non-monotonic Matrix Game, the method overcomes the limitations of non-monotonicity, further demonstrating its broad applicability and superiority. At the application level, the proposed algorithm also extends to other multi-agent system domains, such as robotics and factory robotic-arm control, where it can coordinate each joint to accomplish tasks such as keeping a robot standing upright or controlling the movements of robotic arms.
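As a rough illustration of the frequency-domain step described in the abstract, the sketch below applies a real FFT to one agent's temporal sequence of value estimates. The function name `frequency_features`, the history length, and the use of a plain magnitude spectrum are assumptions made for illustration only; the paper's actual transform, topological weighting scheme, and training pipeline are not reproduced here.

```python
# Illustrative sketch only: the names and the plain magnitude-spectrum features
# below are assumptions, not the authors' published implementation.
import numpy as np

def frequency_features(agent_value_history: np.ndarray) -> np.ndarray:
    """Map one agent's temporal value estimates to frequency-domain magnitudes.

    agent_value_history: shape (T,), value estimates over the last T steps.
    Returns: shape (T // 2 + 1,), magnitude spectrum from a real-input FFT.
    """
    spectrum = np.fft.rfft(agent_value_history)  # transform the time series to the frequency domain
    return np.abs(spectrum)                      # keep magnitudes as frequency-domain features

# Example: three agents, each with a 32-step history of value estimates.
histories = np.random.randn(3, 32)
features = np.stack([frequency_features(h) for h in histories])
print(features.shape)  # (3, 17)
```

In such a sketch, the resulting frequency-domain features could then be weighted and aggregated across agents; the abstract describes doing this with a lightweight scheme derived from historical topological relationships rather than learned attention weights.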
Journal Introduction:
Applied Soft Computing is an international journal promoting an integrated view of soft computing to solve real-life problems. The focus is to publish the highest quality research in the application and convergence of Fuzzy Logic, Neural Networks, Evolutionary Computing, Rough Sets, and other similar techniques to address real-world complexities.
Applied Soft Computing is a rolling publication: articles are published as soon as the editor-in-chief has accepted them. The website is therefore continuously updated with new articles, and publication times are short.