{"title":"A unified distributed algorithm for non-cooperative games","authors":"J. Pang, Meisam Razaviyayn","doi":"10.1017/CBO9781316162750.005","DOIUrl":null,"url":null,"abstract":"This chapter presents a unified framework for the design and analysis of distributed algorithms for computing first-order stationary solutions of non-cooperative games with non-differentiable player objective functions. These games are closely associated with multi-agent optimization wherein a large number of selfish players compete noncooperatively to optimize their individual objectives under various constraints. Unlike centralized algorithms that require a certain system mechanism to coordinate the players’ actions, distributed algorithms have the advantage that the players, either individually or in subgroups, can each make their best responses without full information of their rivals’ actions. These distributed algorithms by nature are particularly suited for solving hugesize games where the large number of players in the game makes the coordination of the players almost impossible. The distributed algorithms are distinguished by several features: parallel versus sequential implementations, scheduled versus randomized player selections, synchronized versus asynchronous transfer of information, and individual versus multiple player updates. Covering many variations of distributed algorithms, the unified algorithm employs convex surrogate functions to handle nonsmooth nonconvex functions and a (possibly multi-valued) choice function to dictate the players’ turns to update their strategies. There are two general approaches to establish the convergence of such algorithms: contraction versus potential based, each requiring different properties of the players’ objective functions. We present the details of the convergence analysis based on these two approaches and discuss randomized extensions of the algorithms that require less coordination and hence are more suitable for big data problems. Introduction Introduced by John von Neumann [1], modern-day game theory has developed into a very fruitful research discipline with applications in many fields. There are two major classifications of a game, cooperative versus non-cooperative. This chapter pertains to one aspect of non-cooperative games for potential applications to big data, namely, the computation of a “solution” to such a game by a distributed algorithm. In a (basic) non-cooperative game, there are finitely many selfish players/agents each optimizing a rival-dependent objective by choosing feasible strategies satisfying certain private constraints. 
Providing a solution concept to such a game, a Nash equilibrium (NE) [2, 3] is by definition a tuple of strategies, one for each player, such that no player will be better off by unilaterally deviating from his/her equilibrium strategy while the rivals keep executing their equilibrium strategies.","PeriodicalId":415319,"journal":{"name":"Big Data over Networks","volume":"304 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"17","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Big Data over Networks","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1017/CBO9781316162750.005","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 17
Abstract
This chapter presents a unified framework for the design and analysis of distributed algorithms that compute first-order stationary solutions of non-cooperative games with non-differentiable player objective functions. Such games are closely associated with multi-agent optimization, in which a large number of selfish players compete non-cooperatively to optimize their individual objectives under various constraints. Unlike centralized algorithms, which require some system mechanism to coordinate the players’ actions, distributed algorithms have the advantage that the players, individually or in subgroups, can each compute their best responses without full information about their rivals’ actions. By nature, these distributed algorithms are particularly suited to solving huge games, in which the sheer number of players makes coordination all but impossible. The distributed algorithms are distinguished by several features: parallel versus sequential implementations, scheduled versus randomized player selections, synchronous versus asynchronous transfer of information, and individual versus multiple player updates. Covering many variations of distributed algorithms, the unified algorithm employs convex surrogate functions to handle nonsmooth, nonconvex functions and a (possibly multi-valued) choice function that dictates the players’ turns to update their strategies. There are two general approaches to establishing the convergence of such algorithms, contraction-based and potential-based, each requiring different properties of the players’ objective functions. We present the details of the convergence analysis under both approaches and discuss randomized extensions of the algorithms that require less coordination and hence are better suited to big data problems.

Introduction

Introduced by John von Neumann [1], modern-day game theory has developed into a very fruitful research discipline with applications in many fields. Games fall into two major classes: cooperative and non-cooperative. This chapter pertains to one aspect of non-cooperative games with potential applications to big data, namely the computation of a “solution” to such a game by a distributed algorithm. In a (basic) non-cooperative game, there are finitely many selfish players/agents, each optimizing a rival-dependent objective by choosing feasible strategies that satisfy certain private constraints. The standard solution concept for such a game, a Nash equilibrium (NE) [2, 3], is by definition a tuple of strategies, one for each player, such that no player is better off by unilaterally deviating from his/her equilibrium strategy while the rivals keep executing theirs.
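To make the unilateral-deviation condition concrete, here is a minimal numerical check on a standard two-firm Cournot game; the game, the parameter values a and c, and the helper names profit and is_nash are illustrative assumptions, not taken from the chapter. The known equilibrium profile q* = ((a-c)/3, (a-c)/3) is tested against a grid of unilateral deviations by each firm.

```python
import numpy as np

# Hypothetical two-firm Cournot game, used only to illustrate the
# Nash-equilibrium definition quoted above.
a, c = 10.0, 1.0                 # inverse-demand intercept and unit cost

def profit(i, q):
    """Profit of firm i (i = 0 or 1) at the joint quantity profile q."""
    price = a - q[0] - q[1]
    return q[i] * price - c * q[i]

# Closed-form Cournot equilibrium: q_i = (a - c) / 3 for both firms.
q_star = np.array([(a - c) / 3.0, (a - c) / 3.0])

def is_nash(q, grid=np.linspace(0.0, 10.0, 1001), tol=1e-8):
    """Return True if no firm can raise its profit by a unilateral
    deviation (over the grid) while the rival keeps its strategy."""
    for i in range(2):
        for dev in grid:              # candidate unilateral deviation
            q_dev = q.copy()
            q_dev[i] = dev
            if profit(i, q_dev) > profit(i, q) + tol:
                return False          # a profitable deviation exists
    return True

print(is_nash(q_star))                     # True: no profitable deviation
print(is_nash(np.array([1.0, 5.0])))       # False for a non-equilibrium profile
```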
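The chapter’s unified algorithm computes such equilibria (more precisely, first-order stationary solutions) in a distributed fashion. The sketch below illustrates the flavor of that iteration under simplifying assumptions made here for illustration only: scalar strategies in [0, x_max], an objective for each player consisting of a smooth coupling term plus a nonsmooth l1 penalty, a randomized choice function selecting which players update in each round, and a proximal-linearized convex surrogate whose minimizer reduces to a soft-threshold step. The game instance, parameter values, and names such as smooth_grad, surrogate_argmin, and p_update are hypothetical; the chapter’s framework is far more general.

```python
import numpy as np

rng = np.random.default_rng(0)

N, x_max = 5, 10.0                     # number of players, strategy upper bound
d = rng.uniform(2.0, 8.0, size=N)      # each player's private target (assumed data)
beta, lam, tau = 0.05, 0.1, 1.0        # coupling strength, l1 weight, proximal weight
p_update = 0.5                         # probability that a player updates in a round

def smooth_grad(i, x):
    """Partial gradient of the smooth part of player i's objective
    0.5*(x_i - d_i)^2 + beta * x_i * sum_{j != i} x_j."""
    return (x[i] - d[i]) + beta * (x.sum() - x[i])

def surrogate_argmin(i, x):
    """Minimizer over [0, x_max] of player i's proximal-linearized convex
    surrogate plus the nonsmooth term lam*|x_i| (a soft-threshold step)."""
    z = x[i] - smooth_grad(i, x) / tau
    z = np.sign(z) * max(abs(z) - lam / tau, 0.0)   # prox of lam*|.|
    return float(np.clip(z, 0.0, x_max))

x = np.zeros(N)                        # initial joint strategy profile
for k in range(300):
    # stationarity residual of the per-player surrogate best-response map
    residual = max(abs(surrogate_argmin(i, x) - x[i]) for i in range(N))
    if residual < 1e-8:
        break
    active = rng.random(N) < p_update  # randomized choice function
    x_new = x.copy()
    for i in np.where(active)[0]:
        x_new[i] = surrogate_argmin(i, x)   # parallel (Jacobi-style) update
    x = x_new

print("approximate equilibrium:", np.round(x, 4), "after", k, "rounds")
```

In this toy instance the per-player map is a contraction (the coupling weight satisfies (N-1)*beta < 1), so the randomized iteration settles at the game’s equilibrium; switching the choice function to a fixed schedule, or updating players one at a time inside the round, recovers the sequential variants mentioned in the abstract.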