{"title":"Optimally Tuning Finite-Difference Estimators","authors":"Haidong Li, H. Lam","doi":"10.1109/WSC48552.2020.9384002","DOIUrl":null,"url":null,"abstract":"We consider stochastic gradient estimation when only noisy function evaluations are available. Central finite-difference scheme is a common method in this setting, which involves generating samples under perturbed inputs. Though it is widely known how to select the perturbation size to achieve the optimal order of the error, exactly achieving the optimal first-order error, which we call asymptotic optimality, is considered much more challenging and not attempted in practice. In this paper, we provide evidence that designing asymptotically optimal estimator is practically possible. In particular, we propose a new two-stage scheme that first estimates the required parameter in the perturbation size, followed by running finite-difference based on the estimated parameter in the first stage. Both theory and numerical experiments demonstrate the optimality of the proposed estimator and the robustness over conventional finite-difference schemes based on ad hoc tuning.","PeriodicalId":6692,"journal":{"name":"2020 Winter Simulation Conference (WSC)","volume":"51 1","pages":"457-468"},"PeriodicalIF":0.0000,"publicationDate":"2020-12-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 Winter Simulation Conference (WSC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/WSC48552.2020.9384002","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
We consider stochastic gradient estimation when only noisy function evaluations are available. The central finite-difference scheme is a common method in this setting; it generates samples under perturbed inputs. Though it is widely known how to select the perturbation size to achieve the optimal order of the error, exactly achieving the optimal first-order error, which we call asymptotic optimality, is considered much more challenging and is not attempted in practice. In this paper, we provide evidence that designing an asymptotically optimal estimator is practically possible. In particular, we propose a new two-stage scheme that first estimates the required parameter in the perturbation size and then runs the finite-difference scheme with the parameter estimated in the first stage. Both theory and numerical experiments demonstrate the optimality of the proposed estimator and its robustness over conventional finite-difference schemes based on ad hoc tuning.
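
The abstract does not spell out the two-stage algorithm, so the following is only a plausible reading. For a central finite difference averaged over n perturbation pairs, the mean squared error is approximately (f'''(x) c^2 / 6)^2 + sigma^2 / (2 n c^2), where sigma^2 is the noise variance; minimizing over c gives the optimal perturbation size c* = (9 sigma^2 / (n f'''(x)^2))^(1/6), so the unknown parameters are f'''(x) and sigma^2. The Python sketch below estimates both in a pilot stage and then runs central differences at the plugged-in c*. The oracle, the pilot allocation `pilot_frac`, and the pilot stencil step `h_pilot` are illustrative assumptions, not the authors' scheme.

```python
import numpy as np

def two_stage_cfd(f, x, budget, pilot_frac=0.2, h_pilot=0.5, seed=0):
    """Illustrative two-stage central finite-difference gradient estimator.

    Stage 1 spends a pilot fraction of the evaluation budget estimating
    the noise variance sigma^2 and the third derivative f'''(x), the
    unknowns in the MSE-optimal perturbation size
        c* = (9 * sigma^2 / (n * f'''(x)**2)) ** (1/6).
    Stage 2 runs plain central finite differences at c* on the rest.
    `f` is a noisy oracle: f(x, rng) returns one noisy evaluation.
    """
    rng = np.random.default_rng(seed)
    n_pilot = max(int(pilot_frac * budget), 8)

    # Stage 1a: estimate sigma^2 from repeated evaluations at the same point.
    reps = np.array([f(x, rng) for _ in range(n_pilot // 2)])
    sigma2_hat = reps.var(ddof=1)

    # Stage 1b: estimate f'''(x) with the noisy third-order central difference
    #   (F(x+2h) - 2 F(x+h) + 2 F(x-h) - F(x-2h)) / (2 h^3),
    # averaged over replications (4 evaluations per replication).
    n_third = max((n_pilot - n_pilot // 2) // 4, 1)
    h = h_pilot
    thirds = [
        (f(x + 2 * h, rng) - 2 * f(x + h, rng)
         + 2 * f(x - h, rng) - f(x - 2 * h, rng)) / (2 * h ** 3)
        for _ in range(n_third)
    ]
    f3_sq = max(np.mean(thirds) ** 2, 1e-12)  # guard against a near-zero estimate

    # Stage 2: plug the estimates into the optimal perturbation size and
    # average n2 central-difference pairs (2 evaluations each).
    n2 = (budget - n_pilot) // 2
    c_star = (9 * sigma2_hat / (n2 * f3_sq)) ** (1 / 6)
    pairs = [(f(x + c_star, rng) - f(x - c_star, rng)) / (2 * c_star)
             for _ in range(n2)]
    return np.mean(pairs)

# Usage on a hypothetical noisy oracle: sin(x) plus Gaussian noise, so the
# true gradient at x = 0.3 is cos(0.3) ~ 0.955.
if __name__ == "__main__":
    oracle = lambda x, rng: np.sin(x) + 0.1 * rng.normal()
    print(two_stage_cfd(oracle, x=0.3, budget=10_000))
```

The split of the budget between the two stages is itself a design choice in this sketch; any reasonable pilot fraction suffices for illustration, since the point of the two-stage idea is that the second stage runs at a data-driven c* rather than an ad hoc one.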