{"title":"Simple fixes that accommodate switching costs in multi-armed bandits","authors":"Ehsan Teymourian , Jian Yang","doi":"10.1016/j.ejor.2024.09.017","DOIUrl":null,"url":null,"abstract":"<div><div>When switching costs are added to the multi-armed bandit (MAB) problem where the arms’ random reward distributions are previously unknown, usually quite different techniques than those for pure MAB are required. We find that two simple fixes on the existing upper-confidence-bound (UCB) policy can work well for MAB with switching costs (MAB-SC). Two cases should be distinguished. One is with <em>positive-gap</em> ambiguity where the performance gap between the leading and lagging arms is known to be at least some <span><math><mrow><mi>δ</mi><mo>></mo><mn>0</mn></mrow></math></span>. For this, our fix is to erect barriers that discourage frivolous arm switchings. The other is with <em>zero-gap</em> ambiguity where absolutely nothing is known. We remedy this by forcing the same arms to be pulled in increasingly prolonged intervals. As usual, the effectivenesses of our fixes are measured by the worst average regrets over long time horizons <span><math><mi>T</mi></math></span>. When the barriers are fixed at <span><math><mrow><mi>δ</mi><mo>/</mo><mn>2</mn></mrow></math></span>, we can accomplish a <span><math><mrow><mo>ln</mo><mrow><mo>(</mo><mi>T</mi><mo>)</mo></mrow></mrow></math></span>-sized regret bound for the positive-gap case. When intervals are such that <span><math><mi>n</mi></math></span> of them occupy <span><math><msup><mrow><mi>n</mi></mrow><mrow><mn>2</mn></mrow></msup></math></span> periods, we can achieve the best possible <span><math><msup><mrow><mi>T</mi></mrow><mrow><mn>1</mn><mo>/</mo><mn>2</mn></mrow></msup></math></span>-sized regret bound for the zero-gap case. Other than UCB, these fixes can be applied to a learning while doing (LWD) heuristic to reach satisfactory results as well. While not yet with the best theoretical guarantees, the LWD-based policies have empirically outperformed those based on UCB and other known alternatives. Numerically competitive policies still include ones resulting from interval-based fixes on Thompson sampling (TS).</div></div>","PeriodicalId":55161,"journal":{"name":"European Journal of Operational Research","volume":"320 3","pages":"Pages 616-627"},"PeriodicalIF":6.0000,"publicationDate":"2024-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"European Journal of Operational Research","FirstCategoryId":"91","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0377221724007203","RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"OPERATIONS RESEARCH & MANAGEMENT SCIENCE","Score":null,"Total":0}
Citations: 0
Abstract
When switching costs are added to the multi-armed bandit (MAB) problem where the arms' random reward distributions are previously unknown, techniques quite different from those for the pure MAB are usually required. We find that two simple fixes to the existing upper-confidence-bound (UCB) policy can work well for MAB with switching costs (MAB-SC). Two cases should be distinguished. One is with positive-gap ambiguity, where the performance gap between the leading and lagging arms is known to be at least some δ > 0. For this, our fix is to erect barriers that discourage frivolous arm switchings. The other is with zero-gap ambiguity, where absolutely nothing is known. We remedy this by forcing the same arms to be pulled in increasingly prolonged intervals. As usual, the effectiveness of our fixes is measured by the worst average regrets over long time horizons T. When the barriers are fixed at δ/2, we can accomplish a ln(T)-sized regret bound for the positive-gap case. When intervals are such that n of them occupy n² periods, we can achieve the best possible T^(1/2)-sized regret bound for the zero-gap case. Beyond UCB, these fixes can also be applied to a learning while doing (LWD) heuristic to reach satisfactory results. While not yet equipped with the best theoretical guarantees, the LWD-based policies have empirically outperformed those based on UCB and other known alternatives. Numerically competitive policies also include ones resulting from interval-based fixes on Thompson sampling (TS).
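To make the two fixes concrete, below is a minimal Python sketch layered on standard UCB1. It is an illustration under stated assumptions, not the paper's implementation: the names (ucb_index, barrier_ucb, interval_ucb, pull) are hypothetical, the barrier is read here as a hysteresis rule (switch only when a challenger's index beats the incumbent's by more than δ/2), and the interval schedule uses odd lengths 1, 3, 5, ... so that the first n intervals occupy exactly n² periods, as the abstract's scaling requires.

```python
import math
import random

def ucb_index(arm, t, counts, sums):
    """Standard UCB1 index: empirical mean plus an exploration bonus."""
    if counts[arm] == 0:
        return float("inf")  # force each arm to be tried at least once
    return sums[arm] / counts[arm] + math.sqrt(2.0 * math.log(t) / counts[arm])

def barrier_ucb(pull, n_arms, horizon, delta):
    """Positive-gap fix (hypothetical reading): UCB with a delta/2 barrier.

    The policy stays on its current arm unless a challenger's index
    exceeds the incumbent's by more than delta/2, which discourages
    frivolous switches.
    """
    counts, sums = [0] * n_arms, [0.0] * n_arms
    current, total = 0, 0.0
    for t in range(1, horizon + 1):
        idx = [ucb_index(a, t, counts, sums) for a in range(n_arms)]
        best = max(range(n_arms), key=idx.__getitem__)
        if best != current and idx[best] > idx[current] + delta / 2.0:
            current = best  # challenger cleared the barrier: switch
        reward = pull(current)
        counts[current] += 1
        sums[current] += reward
        total += reward
    return total

def interval_ucb(pull, n_arms, horizon):
    """Zero-gap fix: commit to one arm per increasingly long interval.

    Interval k lasts 2k - 1 periods, so the first n intervals occupy
    n**2 periods in total; the arm chosen at an interval's start is
    pulled for the whole interval.
    """
    counts, sums = [0] * n_arms, [0.0] * n_arms
    total, t, k = 0.0, 0, 0
    while t < horizon:
        k += 1
        idx = [ucb_index(a, max(t, 1), counts, sums) for a in range(n_arms)]
        arm = max(range(n_arms), key=idx.__getitem__)
        for _ in range(min(2 * k - 1, horizon - t)):
            reward = pull(arm)
            counts[arm] += 1
            sums[arm] += reward
            total += reward
            t += 1
    return total

if __name__ == "__main__":
    means = [0.5, 0.6]  # two illustrative Bernoulli arms, gap delta = 0.1
    pull = lambda a: 1.0 if random.random() < means[a] else 0.0
    print(barrier_ucb(pull, 2, 10_000, delta=0.1))
    print(interval_ucb(pull, 2, 10_000))
```

The interval schedule is what protects the T^(1/2) bound: since n intervals fill n² periods, a horizon of T periods contains only about √T intervals, so switching costs can accumulate at most √T times, which is the same order as the unavoidable regret.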
About the journal:
The European Journal of Operational Research (EJOR) publishes high-quality, original papers that contribute to the methodology of operational research (OR) and to the practice of decision making.