{"title":"Joint-perturbation simultaneous pseudo-gradient","authors":"Carlos Martin, Tuomas Sandholm","doi":"arxiv-2408.09306","DOIUrl":null,"url":null,"abstract":"We study the problem of computing an approximate Nash equilibrium of a game\nwhose strategy space is continuous without access to gradients of the utility\nfunction. Such games arise, for example, when players' strategies are\nrepresented by the parameters of a neural network. Lack of access to gradients\nis common in reinforcement learning settings, where the environment is treated\nas a black box, as well as equilibrium finding in mechanisms such as auctions,\nwhere the mechanism's payoffs are discontinuous in the players' actions. To\ntackle this problem, we turn to zeroth-order optimization techniques that\ncombine pseudo-gradients with equilibrium-finding dynamics. Specifically, we\nintroduce a new technique that requires a number of utility function\nevaluations per iteration that is constant rather than linear in the number of\nplayers. It achieves this by performing a single joint perturbation on all\nplayers' strategies, rather than perturbing each one individually. This yields\na dramatic improvement for many-player games, especially when the utility\nfunction is expensive to compute in terms of wall time, memory, money, or other\nresources. We evaluate our approach on various games, including auctions, which\nhave important real-world applications. Our approach yields a significant\nreduction in the run time required to reach an approximate Nash equilibrium.","PeriodicalId":501315,"journal":{"name":"arXiv - CS - Multiagent Systems","volume":"24 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-08-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Multiagent Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2408.09306","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
We study the problem of computing an approximate Nash equilibrium of a game
with a continuous strategy space, without access to gradients of the utility
function. Such games arise, for example, when players' strategies are
represented by the parameters of a neural network. Lack of access to gradients
is common in reinforcement learning settings, where the environment is treated
as a black box, as well as in equilibrium finding for mechanisms such as auctions,
where the mechanism's payoffs are discontinuous in the players' actions. To
tackle this problem, we turn to zeroth-order optimization techniques that
combine pseudo-gradients with equilibrium-finding dynamics. Specifically, we
introduce a new technique that requires a number of utility function
evaluations per iteration that is constant rather than linear in the number of
players. It achieves this by performing a single joint perturbation on all
players' strategies, rather than perturbing each one individually. This yields
a dramatic improvement for many-player games, especially when the utility
function is expensive to compute in terms of wall time, memory, money, or other
resources. We evaluate our approach on various games, including auctions, which
have important real-world applications. Our approach yields a significant
reduction in the run time required to reach an approximate Nash equilibrium.
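To make the joint-perturbation idea concrete, the following is a minimal Python sketch of one iteration of simultaneous pseudo-gradient ascent that uses a single joint perturbation. It assumes each player's strategy is a flat real vector and that the game is a black box returning every player's utility in one call; the name `joint_perturbation_step`, the two-point (antithetic) difference estimator, and plain gradient-ascent dynamics are illustrative assumptions, not necessarily the authors' exact algorithm.

```python
import numpy as np

def joint_perturbation_step(game, strategies, delta=1e-2, lr=1e-1, rng=None):
    """One illustrative iteration of joint-perturbation pseudo-gradient ascent.

    game:       black-box callable mapping a list of strategy vectors to a
                sequence of all players' utilities (one float per player).
    strategies: list of 1-D numpy arrays, one per player.
    delta:      perturbation radius for the two-point estimator.
    lr:         step size for the pseudo-gradient ascent update.

    Uses only two game evaluations per iteration, regardless of the number
    of players, by perturbing all players' strategies with one joint
    random direction.
    """
    rng = np.random.default_rng() if rng is None else rng

    # A single joint perturbation direction covering all players' strategies.
    z = [rng.standard_normal(x.shape) for x in strategies]

    # Two evaluations of the black-box game (central difference).
    u_plus = np.asarray(game([x + delta * zi for x, zi in zip(strategies, z)]))
    u_minus = np.asarray(game([x - delta * zi for x, zi in zip(strategies, z)]))

    # Each player scales the shared perturbation by its own utility difference,
    # giving a pseudo-gradient estimate of u_i with respect to x_i.
    coeff = (u_plus - u_minus) / (2.0 * delta)
    return [x + lr * coeff[i] * z[i] for i, x in enumerate(strategies)]


# Purely illustrative usage on a toy two-player game (hypothetical utilities):
# u_1 = -||x_1 - x_2||^2, u_2 = -||x_2||^2.
def toy_game(profile):
    x1, x2 = profile
    return [-np.sum((x1 - x2) ** 2), -np.sum(x2 ** 2)]

strategies = [np.ones(3), np.ones(3)]
for _ in range(2000):
    strategies = joint_perturbation_step(toy_game, strategies)
```

Because the components of the joint direction are independent and zero-mean across players, the cross-player terms cancel in expectation, so each player's scaled utility difference is a first-order-unbiased estimate of the gradient of its own utility with respect to its own strategy, while the number of game evaluations per iteration stays at two no matter how many players there are.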