ReLExS: Reinforcement Learning Explanations for Stackelberg No-Regret Learners

Xiangge Huang, Jingyuan Li, Jiaqing Xie
{"title":"ReLExS:针对 Stackelberg 无悔学习者的强化学习解释","authors":"Xiangge Huang, Jingyuan Li, Jiaqing Xie","doi":"arxiv-2408.14086","DOIUrl":null,"url":null,"abstract":"With the constraint of a no regret follower, will the players in a two-player\nStackelberg game still reach Stackelberg equilibrium? We first show when the\nfollower strategy is either reward-average or transform-reward-average, the two\nplayers can always get the Stackelberg Equilibrium. Then, we extend that the\nplayers can achieve the Stackelberg equilibrium in the two-player game under\nthe no regret constraint. Also, we show a strict upper bound of the follower's\nutility difference between with and without no regret constraint. Moreover, in\nconstant-sum two-player Stackelberg games with non-regret action sequences, we\nensure the total optimal utility of the game remains also bounded.","PeriodicalId":501316,"journal":{"name":"arXiv - CS - Computer Science and Game Theory","volume":"204 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-08-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"ReLExS: Reinforcement Learning Explanations for Stackelberg No-Regret Learners\",\"authors\":\"Xiangge Huang, Jingyuan Li, Jiaqing Xie\",\"doi\":\"arxiv-2408.14086\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"With the constraint of a no regret follower, will the players in a two-player\\nStackelberg game still reach Stackelberg equilibrium? We first show when the\\nfollower strategy is either reward-average or transform-reward-average, the two\\nplayers can always get the Stackelberg Equilibrium. Then, we extend that the\\nplayers can achieve the Stackelberg equilibrium in the two-player game under\\nthe no regret constraint. Also, we show a strict upper bound of the follower's\\nutility difference between with and without no regret constraint. Moreover, in\\nconstant-sum two-player Stackelberg games with non-regret action sequences, we\\nensure the total optimal utility of the game remains also bounded.\",\"PeriodicalId\":501316,\"journal\":{\"name\":\"arXiv - CS - Computer Science and Game Theory\",\"volume\":\"204 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-08-26\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Computer Science and Game Theory\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2408.14086\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Computer Science and Game Theory","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2408.14086","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

With the constraint of a no-regret follower, will the players in a two-player Stackelberg game still reach a Stackelberg equilibrium? We first show that when the follower's strategy is either reward-average or transform-reward-average, the two players can always reach the Stackelberg equilibrium. We then extend this result to show that the players can achieve the Stackelberg equilibrium in the two-player game under the no-regret constraint. We also give a strict upper bound on the difference in the follower's utility with and without the no-regret constraint. Moreover, in constant-sum two-player Stackelberg games with no-regret action sequences, we show that the total optimal utility of the game also remains bounded.
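As an illustration of the setting described in the abstract (not of the paper's own method), the sketch below simulates a leader who commits to a fixed mixed strategy in a small bimatrix game while the follower runs Hedge, a standard no-regret algorithm. Because the follower's average regret vanishes, its empirical play drifts toward a best response to the commitment, which is the mechanism that lets the leader secure a Stackelberg-style payoff. The payoff matrices A and B, the learning rate eta, and the helper hedge_follower are illustrative assumptions, not the authors' construction.

```python
# A minimal sketch (assumed toy example, not the paper's construction):
# the leader commits to a fixed mixed strategy in a 2x2 bimatrix game,
# while the follower runs Hedge, a standard no-regret algorithm.
import numpy as np

# Payoff matrices: rows index leader actions, columns index follower actions.
A = np.array([[2.0, 4.0],    # leader utilities
              [1.0, 3.0]])
B = np.array([[1.0, 0.0],    # follower utilities
              [0.0, 2.0]])

def hedge_follower(leader_mixed, rounds=5000, eta=0.05, seed=0):
    """Simulate a Hedge (multiplicative-weights) follower facing a fixed
    leader commitment; return the follower's empirical action frequencies."""
    rng = np.random.default_rng(seed)
    n_actions = B.shape[1]
    weights = np.ones(n_actions)
    counts = np.zeros(n_actions)
    # Expected follower payoff of each action is fixed here because the
    # leader never changes the committed mixed strategy.
    payoffs = leader_mixed @ B
    for _ in range(rounds):
        probs = weights / weights.sum()
        action = rng.choice(n_actions, p=probs)
        counts[action] += 1
        # Multiplicative-weights update under full-information feedback.
        weights *= np.exp(eta * payoffs)
    return counts / counts.sum()

leader_mixed = np.array([0.4, 0.6])            # an arbitrary commitment
freq = hedge_follower(leader_mixed)
print("follower empirical frequencies:", freq)
print("exact best response to the commitment:", np.argmax(leader_mixed @ B))
print("leader's long-run payoff estimate:", leader_mixed @ A @ freq)
```

Sweeping leader_mixed over a grid of commitments and keeping the one with the highest long-run payoff gives a crude empirical estimate of the leader's Stackelberg value in this toy game, which is the kind of quantity whose behavior under the no-regret constraint the paper bounds.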