{"title":"Zeroth-Order Learning in Continuous Games via Residual Pseudogradient Estimates","authors":"Yuanhanqing Huang;Jianghai Hu","doi":"10.1109/TAC.2024.3479874","DOIUrl":null,"url":null,"abstract":"A variety of practical problems can be modeled by the decision-making process in multiplayer games where a group of self-interested players aim at optimizing their own local objectives, while the objectives depend on the actions taken by others. The local gradient information of each player, essential in implementing algorithms for finding game solutions, is all too often unavailable. In this article, we focus on designing solution algorithms for multiplayer games using bandit feedback, i.e., the only available feedback at each player's disposal is the realized objective values. To tackle the issue of large variances in the existing bandit learning algorithms with a single oracle call, we propose two algorithms by integrating the residual feedback scheme into single-call extragradient methods. Subsequently, we show that the actual sequences of play can converge almost surely to a critical point if the game is pseudomonotone plus and characterize the convergence rate to the critical point when the game is strongly pseudomonotone. The ergodic convergence rates of the generated sequences in monotone games are also investigated as a supplement. 
Finally, the validity of the proposed algorithms is further verified via numerical examples.","PeriodicalId":13201,"journal":{"name":"IEEE Transactions on Automatic Control","volume":"70 4","pages":"2258-2273"},"PeriodicalIF":7.0000,"publicationDate":"2024-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Automatic Control","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10715648/","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"AUTOMATION & CONTROL SYSTEMS","Score":null,"Total":0}
Citations: 0
Abstract
A variety of practical problems can be modeled as decision-making processes in multiplayer games, where a group of self-interested players aim to optimize their own local objectives, which in turn depend on the actions taken by the other players. The local gradient information of each player, essential for implementing algorithms that find game solutions, is all too often unavailable. In this article, we focus on designing solution algorithms for multiplayer games under bandit feedback, i.e., the only feedback at each player's disposal is the realized objective value. To tackle the large variance of existing bandit learning algorithms that use a single oracle call, we propose two algorithms that integrate the residual feedback scheme into single-call extragradient methods. We then show that the actual sequences of play converge almost surely to a critical point if the game is pseudomonotone plus, and we characterize the convergence rate to the critical point when the game is strongly pseudomonotone. As a supplement, we also investigate the ergodic convergence rates of the generated sequences in monotone games. Finally, the validity of the proposed algorithms is verified via numerical examples.
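The core idea behind bandit (zeroth-order) feedback with a residual scheme can be illustrated with a minimal sketch: each round makes a single oracle call, and the previous round's realized value stands in for a second query when forming the pseudogradient estimate. The toy objective, query radius, and sample count below are illustrative assumptions, not the paper's algorithm or parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def residual_estimate(f, x, u, prev_value, delta, d):
    """Single-call residual-feedback gradient estimate (sketch).

    One oracle call per round: the realized value from the previous
    round (prev_value) replaces a second function query, which is the
    variance-reduction mechanism behind residual feedback.
    """
    value = f(x + delta * u)                    # the only bandit query this round
    g = (d / delta) * (value - prev_value) * u  # residual difference, scaled
    return g, value

# Demo at a fixed point: averaged over many rounds, the estimates
# recover the true gradient of a toy quadratic objective.
d, delta = 2, 0.05
f = lambda y: float(y @ y)                      # toy objective ||y||^2
x = np.array([1.0, -1.0])                       # true gradient is [2, -2]

u = rng.standard_normal(d); u /= np.linalg.norm(u)
prev_value = f(x + delta * u)                   # seed the residual with one query
acc = np.zeros(d)
n = 50_000
for _ in range(n):
    u = rng.standard_normal(d); u /= np.linalg.norm(u)  # fresh unit perturbation
    g, prev_value = residual_estimate(f, x, u, prev_value, delta, d)
    acc += g
print(acc / n)                                  # close to the true gradient [2, -2]
```

For a quadratic objective the sphere-smoothed gradient coincides with the true gradient, so the empirical mean approaches [2, -2]; in the actual algorithms the estimate would instead feed a single-call extragradient update on the player's action.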
About the journal:
In the IEEE Transactions on Automatic Control, the IEEE Control Systems Society publishes high-quality papers on the theory, design, and applications of control engineering. Two types of contributions are regularly considered:
1) Papers: Presentation of significant research, development, or application of control concepts.
2) Technical Notes and Correspondence: Brief technical notes, comments on published papers or established control topics, and corrections to papers and notes published in the Transactions.
In addition, special papers (tutorials, surveys, and perspectives on the theory and applications of control systems topics) are solicited.