Learning Stationary Nash Equilibrium Policies in n-Player Stochastic Games with Independent Chains

IF 2.2 | CAS Region 2 (Mathematics) | Q2 (Automation & Control Systems) | SIAM Journal on Control and Optimization | Pub Date: 2024-03-01 | DOI: 10.1137/22m1512880
S. Rasoul Etesami
{"title":"Learning Stationary Nash Equilibrium Policies in [math]-Player Stochastic Games with Independent Chains","authors":"S. Rasoul Etesami","doi":"10.1137/22m1512880","DOIUrl":null,"url":null,"abstract":"SIAM Journal on Control and Optimization, Volume 62, Issue 2, Page 799-825, April 2024. <br/> Abstract. We consider a subclass of [math]-player stochastic games, in which players have their own internal state/action spaces while they are coupled through their payoff functions. It is assumed that players’ internal chains are driven by independent transition probabilities. Moreover, players can receive only realizations of their payoffs, not the actual functions, and cannot observe each others’ states/actions. For this class of games, we first show that finding a stationary Nash equilibrium (NE) policy without any assumption on the reward functions is intractable. However, for general reward functions, we develop polynomial-time learning algorithms based on dual averaging and dual mirror descent, which converge in terms of the averaged Nikaido–Isoda distance to the set of [math]-NE policies almost surely or in expectation. In particular, under extra assumptions on the reward functions such as social concavity, we derive polynomial upper bounds on the number of iterates to achieve an [math]-NE policy with high probability. Finally, we evaluate the effectiveness of the proposed algorithms in learning [math]-NE policies using numerical experiments for energy management in smart grids.","PeriodicalId":49531,"journal":{"name":"SIAM Journal on Control and Optimization","volume":null,"pages":null},"PeriodicalIF":2.2000,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"SIAM Journal on Control and Optimization","FirstCategoryId":"100","ListUrlMain":"https://doi.org/10.1137/22m1512880","RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"AUTOMATION & CONTROL SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract

SIAM Journal on Control and Optimization, Volume 62, Issue 2, Page 799-825, April 2024.
Abstract. We consider a subclass of n-player stochastic games, in which players have their own internal state/action spaces while they are coupled through their payoff functions. It is assumed that players' internal chains are driven by independent transition probabilities. Moreover, players can receive only realizations of their payoffs, not the actual functions, and cannot observe each other's states/actions. For this class of games, we first show that finding a stationary Nash equilibrium (NE) policy without any assumption on the reward functions is intractable. However, for general reward functions, we develop polynomial-time learning algorithms based on dual averaging and dual mirror descent, which converge in terms of the averaged Nikaido–Isoda distance to the set of ε-NE policies almost surely or in expectation. In particular, under extra assumptions on the reward functions such as social concavity, we derive polynomial upper bounds on the number of iterates needed to achieve an ε-NE policy with high probability. Finally, we evaluate the effectiveness of the proposed algorithms in learning ε-NE policies using numerical experiments for energy management in smart grids.
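
For context on the convergence metric named in the abstract, the standard Nikaido–Isoda (NI) gap of a stationary policy profile \pi = (\pi_1, \dots, \pi_n) is the quantity below, where V_i(\pi) denotes player i's expected payoff under \pi. This is the textbook definition; the paper's "averaged" NI distance may differ in details the abstract does not give:

\Psi(\pi) \;=\; \sum_{i=1}^{n} \Big( \max_{\pi_i'} V_i(\pi_i', \pi_{-i}) \;-\; V_i(\pi_i, \pi_{-i}) \Big).

Since \Psi(\pi) \ge 0 with equality exactly at a stationary NE, and \Psi(\pi) \le n\epsilon at any \epsilon-NE, driving this gap toward zero certifies approximate equilibrium.

The abstract also names dual averaging as one of the two learning algorithms. The following is a minimal illustrative sketch, not the paper's algorithm: it assumes bandit (realized-payoff-only) feedback and an entropy regularizer, under which the dual-averaging update reduces to a softmax of scaled cumulative payoff estimates, and it ignores the players' internal Markov chains; sample_payoff is a hypothetical oracle.

import numpy as np

def dual_averaging_policies(sample_payoff, n_players, n_actions, T, eta=0.5, seed=0):
    # Sketch of dual averaging with entropy regularization under bandit feedback.
    # sample_payoff(i, actions) is a hypothetical oracle returning player i's
    # realized payoff in [0, 1]; illustrative analogue only, not the paper's method.
    rng = np.random.default_rng(seed)
    duals = np.zeros((n_players, n_actions))          # cumulative payoff-gradient estimates
    policies = np.full((n_players, n_actions), 1.0 / n_actions)
    for t in range(1, T + 1):
        # Each player independently samples an action from its current mixed policy.
        actions = [rng.choice(n_actions, p=policies[i]) for i in range(n_players)]
        for i in range(n_players):
            r = sample_payoff(i, actions)             # only a payoff realization is observed
            g_hat = np.zeros(n_actions)
            g_hat[actions[i]] = r / policies[i][actions[i]]   # importance-weighted estimate
            duals[i] += g_hat
        for i in range(n_players):
            # Dual-averaging step: softmax of the cumulative duals with a decaying step size.
            w = (eta / np.sqrt(t)) * duals[i]
            w -= w.max()                              # subtract max for numerical stability
            policies[i] = np.exp(w) / np.exp(w).sum()
    return policies

# Hypothetical usage: two players, three actions, a simple coordination payoff.
payoff = lambda i, a: 1.0 if a[0] == a[1] else 0.0
mixed_policies = dual_averaging_policies(payoff, n_players=2, n_actions=3, T=5000)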
Source journal
CiteScore: 4.00
Self-citation rate: 4.50%
Articles published: 143
Review time: 12 months
About the journal: SIAM Journal on Control and Optimization (SICON) publishes original research articles on the mathematics and applications of control theory and certain parts of optimization theory. Papers considered for publication must be significant at both the mathematical level and the level of applications or potential applications. Papers containing mostly routine mathematics or those with no discernible connection to control and systems theory or optimization will not be considered for publication. From time to time, the journal will also publish authoritative surveys of important subject areas in control theory and optimization whose level of maturity permits a clear and unified exposition. The broad areas mentioned above are intended to encompass a wide range of mathematical techniques and scientific, engineering, economic, and industrial applications. These include stochastic and deterministic methods in control, estimation, and identification of systems; modeling and realization of complex control systems; the numerical analysis and related computational methodology of control processes and allied issues; and the development of mathematical theories and techniques that give new insights into old problems or provide the basis for further progress in control theory and optimization. Within the field of optimization, the journal focuses on the parts that are relevant to dynamic and control systems. Contributions to numerical methodology are also welcome in accordance with these aims, especially as related to large-scale problems and decomposition as well as to fundamental questions of convergence and approximation.
Latest articles in this journal
Local Exact Controllability of the One-Dimensional Nonlinear Schrödinger Equation in the Case of Dirichlet Boundary Conditions
Backward Stochastic Differential Equations with Conditional Reflection and Related Recursive Optimal Control Problems
Logarithmic Regret Bounds for Continuous-Time Average-Reward Markov Decision Processes
Optimal Ratcheting of Dividend Payout Under Brownian Motion Surplus
An Optimal Spectral Inequality for Degenerate Operators