{"title":"First-Order Algorithms for Nonlinear Generalized Nash Equilibrium Problems","authors":"Michael I. Jordan, Tianyi Lin, M. Zampetakis","doi":"10.48550/arXiv.2204.03132","DOIUrl":null,"url":null,"abstract":"We consider the problem of computing an equilibrium in a class of \\textit{nonlinear generalized Nash equilibrium problems (NGNEPs)} in which the strategy sets for each player are defined by equality and inequality constraints that may depend on the choices of rival players. While the asymptotic global convergence and local convergence rates of algorithms to solve this problem have been extensively investigated, the analysis of nonasymptotic iteration complexity is still in its infancy. This paper presents two first-order algorithms -- based on the quadratic penalty method (QPM) and augmented Lagrangian method (ALM), respectively -- with an accelerated mirror-prox algorithm as the solver in each inner loop. We establish a global convergence guarantee for solving monotone and strongly monotone NGNEPs and provide nonasymptotic complexity bounds expressed in terms of the number of gradient evaluations. Experimental results demonstrate the efficiency of our algorithms in practice.","PeriodicalId":14794,"journal":{"name":"J. Mach. Learn. Res.","volume":"20 1","pages":"38:1-38:46"},"PeriodicalIF":0.0000,"publicationDate":"2022-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"8","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"J. Mach. Learn. Res.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.48550/arXiv.2204.03132","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 8
Abstract
We consider the problem of computing an equilibrium in a class of nonlinear generalized Nash equilibrium problems (NGNEPs) in which the strategy sets for each player are defined by equality and inequality constraints that may depend on the choices of rival players. While the asymptotic global convergence and local convergence rates of algorithms to solve this problem have been extensively investigated, the analysis of nonasymptotic iteration complexity is still in its infancy. This paper presents two first-order algorithms -- based on the quadratic penalty method (QPM) and augmented Lagrangian method (ALM), respectively -- with an accelerated mirror-prox algorithm as the solver in each inner loop. We establish a global convergence guarantee for solving monotone and strongly monotone NGNEPs and provide nonasymptotic complexity bounds expressed in terms of the number of gradient evaluations. Experimental results demonstrate the efficiency of our algorithms in practice.
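To make the penalty-based, double-loop structure described in the abstract concrete, the following is a minimal sketch (not the paper's exact algorithm) of a quadratic-penalty outer loop for a toy two-player game with one coupled inequality constraint, using plain extragradient steps (a Euclidean instance of mirror-prox) as the inner solver. All problem data, step sizes, and function names here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Toy strongly monotone pseudo-gradient F(z) = A z for the joint strategy
# z = (x1, x2); the matrix A is made up for illustration.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

def game_operator(z):
    """Pseudo-gradient of the toy game: stacked partial gradients of each player's cost."""
    return A @ z

def penalty_gradient(z, rho):
    """Gradient of the quadratic penalty rho/2 * max(0, 1 - x1 - x2)^2
    for the shared coupling constraint x1 + x2 >= 1 (hypothetical example)."""
    violation = max(0.0, 1.0 - z.sum())
    return -rho * violation * np.ones_like(z)

def extragradient_inner(z, rho, iters=400):
    """Inner loop: extragradient (mirror-prox with Euclidean geometry) on the
    penalized operator F(z) + grad of the penalty term."""
    # Step size scaled to the Lipschitz constant of the penalized operator.
    step = 0.9 / (np.linalg.norm(A, 2) + rho)
    for _ in range(iters):
        g = game_operator(z) + penalty_gradient(z, rho)
        z_half = z - step * g                                    # extrapolation step
        g_half = game_operator(z_half) + penalty_gradient(z_half, rho)
        z = z - step * g_half                                    # update step
    return z

def quadratic_penalty_method(z0, rho0=1.0, growth=2.0, outer_iters=8):
    """Outer loop: approximately solve the penalized problem, then grow the penalty."""
    z, rho = z0.copy(), rho0
    for _ in range(outer_iters):
        z = extragradient_inner(z, rho)
        rho *= growth                                            # tighten constraint enforcement
    return z

z_star = quadratic_penalty_method(np.zeros(2))
print("approximate equilibrium:", z_star)
print("constraint residual (should approach 0):", 1.0 - z_star.sum())
```

The augmented Lagrangian variant mentioned in the abstract would additionally maintain multiplier estimates across outer iterations instead of relying solely on an increasing penalty parameter; the accelerated mirror-prox inner solver of the paper replaces the plain extragradient loop above.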