Fast convergence of learning in games (invited talk)

Vasilis Syrgkanis
{"title":"游戏中的快速收敛学习(特邀演讲)","authors":"Vasilis Syrgkanis","doi":"10.1145/3055399.3084098","DOIUrl":null,"url":null,"abstract":"A plethora of recent work has analyzed properties of outcomes in games when each player employs a no-regret learning algorithm. Many algorithms achieve regret against the best fixed action in hindisght that decays at a rate of O(1/'T), when the game is played for T iterations. The latter rate is optimal in adversarial settings. However, in a game a player's opponents are minimizing their own regret, rather than maximizing the player's regret. (Daskalakis et al. 2014) and (Rakhlin and Sridharan 2013) showed that in two player zero-sum games O(1/T) rates are achievable. In (Syrgkanis et al. 2015), we show that O(1/T3/4) rates are achievable in general multi-player games and also analyze convergence of the dynamics to approximately optimal social welfare, where we show a convergence rate of O(1/T). The latter result was subsequently generalized to a broader class of learning algorithms by (Foster et al. 2016). This is based on joint work with Alekh Agarwal, Haipeng Luo and Robert E. Schapire.","PeriodicalId":20615,"journal":{"name":"Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing","volume":"451 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2017-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Fast convergence of learning in games (invited talk)\",\"authors\":\"Vasilis Syrgkanis\",\"doi\":\"10.1145/3055399.3084098\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"A plethora of recent work has analyzed properties of outcomes in games when each player employs a no-regret learning algorithm. Many algorithms achieve regret against the best fixed action in hindisght that decays at a rate of O(1/'T), when the game is played for T iterations. The latter rate is optimal in adversarial settings. However, in a game a player's opponents are minimizing their own regret, rather than maximizing the player's regret. (Daskalakis et al. 2014) and (Rakhlin and Sridharan 2013) showed that in two player zero-sum games O(1/T) rates are achievable. In (Syrgkanis et al. 2015), we show that O(1/T3/4) rates are achievable in general multi-player games and also analyze convergence of the dynamics to approximately optimal social welfare, where we show a convergence rate of O(1/T). The latter result was subsequently generalized to a broader class of learning algorithms by (Foster et al. 2016). This is based on joint work with Alekh Agarwal, Haipeng Luo and Robert E. 
Schapire.\",\"PeriodicalId\":20615,\"journal\":{\"name\":\"Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing\",\"volume\":\"451 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2017-06-19\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3055399.3084098\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3055399.3084098","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

A plethora of recent work has analyzed properties of outcomes in games when each player employs a no-regret learning algorithm. Many algorithms achieve regret against the best fixed action in hindsight that decays at a rate of O(1/√T) when the game is played for T iterations. The latter rate is optimal in adversarial settings. However, in a game a player's opponents are minimizing their own regret, rather than maximizing the player's regret. (Daskalakis et al. 2014) and (Rakhlin and Sridharan 2013) showed that in two-player zero-sum games O(1/T) rates are achievable. In (Syrgkanis et al. 2015), we show that O(1/T^{3/4}) rates are achievable in general multi-player games, and we also analyze convergence of the dynamics to approximately optimal social welfare, where we show a convergence rate of O(1/T). The latter result was subsequently generalized to a broader class of learning algorithms by (Foster et al. 2016). This is based on joint work with Alekh Agarwal, Haipeng Luo and Robert E. Schapire.
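To make the setting concrete, below is a minimal, illustrative Python sketch (not code from the talk or from any of the cited papers) of Optimistic Hedge, one optimistic "predictable-sequence" algorithm of the kind behind the fast rates discussed above, run by both players in a small two-player zero-sum game. The function name optimistic_hedge, the step size eta, the horizon T, and the random payoff matrix are all illustrative choices, not prescribed by the abstract.

```python
import numpy as np


def softmax(z):
    """Numerically stable softmax."""
    z = z - z.max()
    w = np.exp(z)
    return w / w.sum()


def optimistic_hedge(A, T=1000, eta=0.1):
    """Both players run Optimistic Hedge on the zero-sum game with payoff matrix A.

    The row player wants to minimize x^T A y; the column player wants to
    maximize it (equivalently, minimize -x^T A y). Returns the row player's
    average regret against the best fixed action in hindsight.
    """
    n, m = A.shape
    cum_loss_x, cum_loss_y = np.zeros(n), np.zeros(m)
    prev_loss_x, prev_loss_y = np.zeros(n), np.zeros(m)
    realized = 0.0

    for _ in range(T):
        # Optimistic step: react to the cumulative losses plus a "prediction"
        # of the next loss, taken here to be the previous round's loss vector.
        x = softmax(-eta * (cum_loss_x + prev_loss_x))
        y = softmax(-eta * (cum_loss_y + prev_loss_y))

        loss_x = A @ y        # row player's per-action loss this round
        loss_y = -(A.T @ x)   # column player's per-action loss (negated gain)

        realized += x @ loss_x
        cum_loss_x += loss_x
        cum_loss_y += loss_y
        prev_loss_x, prev_loss_y = loss_x, loss_y

    # Regret = realized loss minus the loss of the best fixed row in hindsight.
    return (realized - cum_loss_x.min()) / T


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.uniform(size=(5, 5))  # random zero-sum game with payoffs in [0, 1]
    for T in (100, 1_000, 10_000):
        print(f"T={T:>6}  average regret ~ {optimistic_hedge(A, T=T):.5f}")
```

Dropping the prev_loss terms from the softmax arguments recovers plain Hedge, whose regret decays only at the worst-case O(1/√T) rate; the optimistic variant exploits the fact that when the opponent also plays a no-regret algorithm, its strategy, and hence the loss vector, changes slowly from round to round, which is the intuition behind the faster rates in the results above.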