An Improved Regret Bound for Thompson Sampling in the Gaussian Linear Bandit Setting
Cem Kalkanli, Ayfer Özgür
DOI: 10.1109/ISIT44484.2020.9174371
2020 IEEE International Symposium on Information Theory (ISIT), June 2020
Citations: 3
Abstract
Thompson sampling has attracted significant recent interest due to its wide applicability to online learning problems and its good empirical and theoretical performance. In this paper, we analyze the performance of Thompson sampling in the canonical Gaussian linear bandit setting. We prove that the Bayesian regret of Thompson sampling in this setting is bounded by $O(\sqrt{T\log(T)})$, improving on an earlier bound of $O(\sqrt{T}\log(T))$ in the literature for the case of an infinite and compact action set. Our proof relies on a Cauchy–Schwarz type inequality which can be of interest in its own right.
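The algorithm analyzed above can be illustrated with a minimal sketch for a finite action set (the paper treats an infinite, compact action set, so this is a simplification). The function name, hyperparameters, and the conjugate Gaussian posterior update below are illustrative assumptions, not the authors' code:

```python
import numpy as np

def thompson_sampling_linear(actions, theta_star, T, noise_sd=1.0, prior_sd=1.0, seed=0):
    """Sketch of Thompson sampling in a Gaussian linear bandit.

    actions: (K, d) array of feature vectors (finite stand-in for a
    compact action set); theta_star: true parameter, which the Bayesian
    model views as drawn from the N(0, prior_sd^2 I) prior.
    Returns the cumulative (pseudo-)regret over T rounds.
    """
    rng = np.random.default_rng(seed)
    K, d = actions.shape
    # Gaussian posterior over theta in information form: precision B, vector f
    B = np.eye(d) / prior_sd**2
    f = np.zeros(d)
    regret = 0.0
    best = np.max(actions @ theta_star)  # value of the optimal action
    for t in range(T):
        cov = np.linalg.inv(B)
        mean = cov @ f
        theta_t = rng.multivariate_normal(mean, cov)  # sample from posterior
        a = actions[np.argmax(actions @ theta_t)]     # act greedily w.r.t. sample
        r = a @ theta_star + noise_sd * rng.normal()  # noisy linear reward
        B += np.outer(a, a) / noise_sd**2             # conjugate posterior update
        f += a * r / noise_sd**2
        regret += best - a @ theta_star
    return regret
```

The Bayesian regret bound in the paper concerns the expectation of this cumulative regret under the prior; the $O(\sqrt{T\log(T)})$ rate means the per-round regret vanishes as $T$ grows.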