Title: Accelerating Nash Q-Learning with Graphical Game Representation and Equilibrium Solving
Authors: Yunkai Zhuang, Xingguo Chen, Yang Gao, Yujing Hu
Published in: 2019 IEEE 31st International Conference on Tools with Artificial Intelligence (ICTAI), November 2019
DOI: 10.1109/ICTAI.2019.00133 (https://doi.org/10.1109/ICTAI.2019.00133)
Citations: 1
Abstract
The traditional Nash Q-learning algorithm generally assumes that agents are tightly coupled, which imposes a heavy computational burden. However, many real-world multi-agent systems exhibit sparse interactions between agents. In this paper, sparse interactions are divided into two categories: intra-group sparse interactions and inter-group sparse interactions. Previous methods can handle only one specific type of sparse interaction. To characterize both categories, we use a novel mathematical model called the Markov graphical game. On this basis, graphical game-based Nash Q-learning is proposed to handle the different types of interactions. Experimental results show that our algorithm takes less time per episode and acquires a good policy.
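To make the baseline concrete, the sketch below shows a single Nash Q-learning update for two agents: each agent's Q-table over joint actions is backed up with the agents' payoffs at a Nash equilibrium of the next state's stage game. This is a minimal illustration of standard Nash Q-learning only, not the paper's graphical-game variant (which restricts equilibrium solving to each agent's neighborhood); the pure-strategy enumeration, the maximax fallback, and all names are our assumptions for illustration.

```python
import numpy as np

def pure_nash(Q1, Q2):
    """Return a pure-strategy Nash equilibrium (a1, a2) of the bimatrix
    stage game with payoff matrices Q1, Q2, or None if none exists."""
    n1, n2 = Q1.shape
    for a1 in range(n1):
        for a2 in range(n2):
            # Neither agent can gain by unilaterally deviating.
            if Q1[a1, a2] >= Q1[:, a2].max() and Q2[a1, a2] >= Q2[a1, :].max():
                return a1, a2
    return None

def nash_q_update(Q1, Q2, s, joint_a, r, s_next, alpha=0.1, gamma=0.9):
    """One Nash-Q backup. Q1, Q2 map state -> (n1 x n2) payoff matrix;
    joint_a is the executed joint action, r = (r1, r2) the rewards."""
    eq = pure_nash(Q1[s_next], Q2[s_next])
    if eq is None:
        # Illustrative fallback when no pure equilibrium exists:
        # pick the joint action maximizing the summed payoffs.
        eq = np.unravel_index(np.argmax(Q1[s_next] + Q2[s_next]),
                              Q1[s_next].shape)
    v1, v2 = Q1[s_next][eq], Q2[s_next][eq]   # equilibrium payoffs at s'
    Q1[s][joint_a] += alpha * (r[0] + gamma * v1 - Q1[s][joint_a])
    Q2[s][joint_a] += alpha * (r[1] + gamma * v2 - Q2[s][joint_a])
```

Because the update enumerates joint actions, its cost grows exponentially with the number of agents; the paper's contribution is to cut this cost by solving equilibria only over sparsely interacting groups.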