{"title":"Evolutionary game dynamics of multi-agent cooperation driven by self-learning","authors":"Jinming Du, Bin Wu, Long Wang","doi":"10.1109/ASCC.2013.6606032","DOIUrl":null,"url":null,"abstract":"Multi-agent cooperation problem is a fundamental issue in the coordination control field. Individuals achieve a common task through association with others or division of labor. Evolutionary game dynamics offers a basic framework to investigate how agents self-adaptively switch their strategies in accordance with various targets, and also the evolution of their behaviors. In this paper, we analytically study the strategy evolution in a multiple player game model driven by self-learning. Self-learning dynamics is of importance for agent strategy updating yet seldom analytically addressed before. It is based on self-evaluation, which applies to distributed control. We focus on the abundance of different strategies (behaviors of agents) and their oscillation (frequency of behavior switching). We arrive at the condition under which a strategy is more abundant over the other under weak selection limit. Such condition holds for any finite population size of N ≥ 3, thus it fits for the systems with finite agents, which has notable advantage over that of pairwise comparison process. At certain states of evolutionary stable state, there exists “ping-pong effect” with stable frequency, which is not affected by aspirations. Our results indicate that self-learning dynamics of multi-player games has special characters. Compared with pairwise comparison dynamics and Moran process, it shows different effect on strategy evolution, such as promoting cooperation in collective risk games with large threshold.","PeriodicalId":6304,"journal":{"name":"2013 9th Asian Control Conference (ASCC)","volume":"166 1","pages":"1-6"},"PeriodicalIF":0.0000,"publicationDate":"2013-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"7","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2013 9th Asian Control Conference (ASCC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ASCC.2013.6606032","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 7
Abstract
The multi-agent cooperation problem is a fundamental issue in the field of coordination control. Individuals accomplish a common task through association with others or through division of labor. Evolutionary game dynamics offers a basic framework for investigating how agents self-adaptively switch their strategies in accordance with various targets, and how their behaviors evolve. In this paper, we analytically study strategy evolution in a multi-player game model driven by self-learning. Self-learning dynamics is important for agent strategy updating, yet it has seldom been addressed analytically before. It is based on self-evaluation, which makes it suitable for distributed control. We focus on the abundance of different strategies (behaviors of agents) and their oscillation (frequency of behavior switching). We derive the condition under which one strategy is more abundant than the other in the weak selection limit. This condition holds for any finite population size N ≥ 3, so it applies to systems with finitely many agents, a notable advantage over the corresponding condition for the pairwise comparison process. At certain evolutionarily stable states, there exists a “ping-pong effect” with a stable frequency that is not affected by aspiration levels. Our results indicate that the self-learning dynamics of multi-player games has distinctive characteristics. Compared with pairwise comparison dynamics and the Moran process, it affects strategy evolution differently, for example by promoting cooperation in collective-risk games with a large threshold.
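To make the self-learning (aspiration-based) update rule concrete, the sketch below simulates aspiration-driven strategy switching in a finite population playing a group game. It is a minimal illustration under assumed details: the collective-risk-style payoff scheme, the Fermi-type switching probability, and all parameter names and values (ALPHA, BETA, GROUP, THRESHOLD) are our assumptions for illustration, not the paper's exact model.

```python
# A minimal sketch of aspiration-driven (self-learning) strategy updating in a
# multi-player game. The payoff scheme (a collective-risk-style group game),
# the aspiration level ALPHA, the selection intensity BETA, and all parameter
# values are illustrative assumptions, not the paper's exact model.
import math
import random

N = 20          # population size (any finite N >= 3)
GROUP = 5       # players sampled into each group game
THRESHOLD = 3   # cooperators needed for the group to earn the benefit
BENEFIT, COST = 1.0, 0.4
ALPHA = 0.3     # aspiration level: payoffs below it favor switching
BETA = 0.1      # selection intensity (weak selection: BETA -> 0)

def payoff(is_cooperator: bool, others: list) -> float:
    """Collective-risk-style payoff: the benefit is paid only if the
    group reaches the cooperation threshold; cooperators pay a cost."""
    n_coop = sum(others) + (1 if is_cooperator else 0)
    gain = BENEFIT if n_coop >= THRESHOLD else 0.0
    return gain - (COST if is_cooperator else 0.0)

def switch_prob(pi: float) -> float:
    """Self-learning rule: the probability of switching strategy grows as
    the realized payoff pi falls below the aspiration ALPHA (Fermi function)."""
    return 1.0 / (1.0 + math.exp(-BETA * (ALPHA - pi)))

pop = [random.random() < 0.5 for _ in range(N)]  # True = cooperator
switches = 0
for step in range(10_000):
    focal = random.randrange(N)
    others = random.sample([i for i in range(N) if i != focal], GROUP - 1)
    pi = payoff(pop[focal], [pop[i] for i in others])
    if random.random() < switch_prob(pi):   # self-evaluation only:
        pop[focal] = not pop[focal]         # no imitation of other agents
        switches += 1

print(f"cooperator fraction: {sum(pop)/N:.2f}, "
      f"switch rate: {switches/10_000:.2f}")
```

The reported switch rate is a rough proxy for the oscillation (frequency of behavior switching) discussed in the abstract. Note that the update depends only on the focal agent's own payoff and aspiration, not on comparison with others, which is what distinguishes self-learning dynamics from pairwise comparison; in the weak-selection limit BETA → 0 the switching probability tends to 1/2 regardless of the realized payoff.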