{"title":"在具有独立链的[数学]玩家随机博弈中学习静态纳什均衡政策","authors":"S. Rasoul Etesami","doi":"10.1137/22m1512880","DOIUrl":null,"url":null,"abstract":"SIAM Journal on Control and Optimization, Volume 62, Issue 2, Page 799-825, April 2024. <br/> Abstract. We consider a subclass of [math]-player stochastic games, in which players have their own internal state/action spaces while they are coupled through their payoff functions. It is assumed that players’ internal chains are driven by independent transition probabilities. Moreover, players can receive only realizations of their payoffs, not the actual functions, and cannot observe each others’ states/actions. For this class of games, we first show that finding a stationary Nash equilibrium (NE) policy without any assumption on the reward functions is intractable. However, for general reward functions, we develop polynomial-time learning algorithms based on dual averaging and dual mirror descent, which converge in terms of the averaged Nikaido–Isoda distance to the set of [math]-NE policies almost surely or in expectation. In particular, under extra assumptions on the reward functions such as social concavity, we derive polynomial upper bounds on the number of iterates to achieve an [math]-NE policy with high probability. Finally, we evaluate the effectiveness of the proposed algorithms in learning [math]-NE policies using numerical experiments for energy management in smart grids.","PeriodicalId":49531,"journal":{"name":"SIAM Journal on Control and Optimization","volume":null,"pages":null},"PeriodicalIF":2.2000,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Learning Stationary Nash Equilibrium Policies in [math]-Player Stochastic Games with Independent Chains\",\"authors\":\"S. 
Rasoul Etesami\",\"doi\":\"10.1137/22m1512880\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"SIAM Journal on Control and Optimization, Volume 62, Issue 2, Page 799-825, April 2024. <br/> Abstract. We consider a subclass of [math]-player stochastic games, in which players have their own internal state/action spaces while they are coupled through their payoff functions. It is assumed that players’ internal chains are driven by independent transition probabilities. Moreover, players can receive only realizations of their payoffs, not the actual functions, and cannot observe each others’ states/actions. For this class of games, we first show that finding a stationary Nash equilibrium (NE) policy without any assumption on the reward functions is intractable. However, for general reward functions, we develop polynomial-time learning algorithms based on dual averaging and dual mirror descent, which converge in terms of the averaged Nikaido–Isoda distance to the set of [math]-NE policies almost surely or in expectation. In particular, under extra assumptions on the reward functions such as social concavity, we derive polynomial upper bounds on the number of iterates to achieve an [math]-NE policy with high probability. 
Finally, we evaluate the effectiveness of the proposed algorithms in learning [math]-NE policies using numerical experiments for energy management in smart grids.\",\"PeriodicalId\":49531,\"journal\":{\"name\":\"SIAM Journal on Control and Optimization\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":2.2000,\"publicationDate\":\"2024-03-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"SIAM Journal on Control and Optimization\",\"FirstCategoryId\":\"100\",\"ListUrlMain\":\"https://doi.org/10.1137/22m1512880\",\"RegionNum\":2,\"RegionCategory\":\"数学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"AUTOMATION & CONTROL SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"SIAM Journal on Control and Optimization","FirstCategoryId":"100","ListUrlMain":"https://doi.org/10.1137/22m1512880","RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"AUTOMATION & CONTROL SYSTEMS","Score":null,"Total":0}
Learning Stationary Nash Equilibrium Policies in [math]-Player Stochastic Games with Independent Chains
SIAM Journal on Control and Optimization, Volume 62, Issue 2, Page 799-825, April 2024. Abstract. We consider a subclass of [math]-player stochastic games, in which players have their own internal state/action spaces while they are coupled through their payoff functions. It is assumed that players’ internal chains are driven by independent transition probabilities. Moreover, players can receive only realizations of their payoffs, not the actual functions, and cannot observe each other’s states/actions. For this class of games, we first show that finding a stationary Nash equilibrium (NE) policy without any assumption on the reward functions is intractable. However, for general reward functions, we develop polynomial-time learning algorithms based on dual averaging and dual mirror descent, which converge, in terms of the averaged Nikaido–Isoda distance, to the set of [math]-NE policies almost surely or in expectation. In particular, under extra assumptions on the reward functions such as social concavity, we derive polynomial upper bounds on the number of iterates needed to achieve an [math]-NE policy with high probability. Finally, we evaluate the effectiveness of the proposed algorithms in learning [math]-NE policies using numerical experiments for energy management in smart grids.
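To illustrate the flavor of the abstract's approach, here is a minimal sketch of entropic dual averaging (each player plays the softmax of its accumulated payoff gradients) on a single-state, two-player zero-sum matrix game, with convergence measured by the Nikaido–Isoda gap. This is a deliberately degenerate special case: the paper's setting involves players with independent internal Markov chains and stationary policies, none of which is modeled here, and the game, step size, and horizon below are illustrative choices, not the authors' construction.

```python
import numpy as np

# Payoff matrices for a two-player zero-sum game (matching pennies).
A = np.array([[1.0, -1.0], [-1.0, 1.0]])  # row player's payoffs (maximizer)
B = -A                                    # column player's payoffs

def ni_gap(x, y):
    """Nikaido-Isoda gap: sum of each player's best unilateral
    improvement over its current payoff. Zero iff (x, y) is a NE."""
    gain1 = np.max(A @ y) - x @ A @ y   # row player's best deviation gain
    gain2 = np.max(x @ B) - x @ B @ y   # column player's best deviation gain
    return gain1 + gain2

def dual_averaging(T=20000, eta=0.05):
    """Entropic dual averaging: accumulate payoff gradients and play
    their softmax; in zero-sum games the time-averaged play converges
    to the set of Nash equilibria."""
    s1, s2 = np.zeros(2), np.zeros(2)       # cumulative gradients
    avg_x, avg_y = np.zeros(2), np.zeros(2) # running averages of play
    for t in range(1, T + 1):
        x = np.exp(eta * s1 - np.max(eta * s1)); x /= x.sum()  # stable softmax
        y = np.exp(eta * s2 - np.max(eta * s2)); y /= y.sum()
        s1 += A @ y          # gradient of row player's payoff in x
        s2 += x @ B          # gradient of column player's payoff in y
        avg_x += (x - avg_x) / t             # incremental mean
        avg_y += (y - avg_y) / t
    return avg_x, avg_y

x, y = dual_averaging()
print(ni_gap(x, y))  # small: averaged play approaches the uniform NE
```

The instantaneous iterates of such dynamics can cycle in zero-sum games; it is the averaged play whose Nikaido–Isoda gap shrinks, which mirrors the abstract's use of the *averaged* Nikaido–Isoda distance as the convergence metric.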
Journal Introduction:
SIAM Journal on Control and Optimization (SICON) publishes original research articles on the mathematics and applications of control theory and certain parts of optimization theory. Papers considered for publication must be significant at both the mathematical level and the level of applications or potential applications. Papers containing mostly routine mathematics or those with no discernible connection to control and systems theory or optimization will not be considered for publication. From time to time, the journal will also publish authoritative surveys of important subject areas in control theory and optimization whose level of maturity permits a clear and unified exposition.
The broad areas mentioned above are intended to encompass a wide range of mathematical techniques and scientific, engineering, economic, and industrial applications. These include stochastic and deterministic methods in control, estimation, and identification of systems; modeling and realization of complex control systems; the numerical analysis and related computational methodology of control processes and allied issues; and the development of mathematical theories and techniques that give new insights into old problems or provide the basis for further progress in control theory and optimization. Within the field of optimization, the journal focuses on the parts that are relevant to dynamic and control systems. Contributions to numerical methodology are also welcome in accordance with these aims, especially as related to large-scale problems and decomposition as well as to fundamental questions of convergence and approximation.