{"title":"Stochastic Trust-Region and Direct-Search Methods: A Weak Tail Bound Condition and Reduced Sample Sizing","authors":"F. Rinaldi, L. N. Vicente, D. Zeffiro","doi":"10.1137/22m1543446","DOIUrl":null,"url":null,"abstract":"SIAM Journal on Optimization, Volume 34, Issue 2, Page 2067-2092, June 2024. <br/> Abstract. Using tail bounds, we introduce a new probabilistic condition for function estimation in stochastic derivative-free optimization (SDFO) which leads to a reduction in the number of samples and eases algorithmic analyses. Moreover, we develop simple stochastic direct-search and trust-region methods for the optimization of a potentially nonsmooth function whose values can only be estimated via stochastic observations. For trial points to be accepted, these algorithms require the estimated function values to yield a sufficient decrease measured in terms of a power larger than 1 of the algoritmic stepsize. Our new tail bound condition is precisely imposed on the reduction estimate used to achieve such a sufficient decrease. This condition allows us to select the stepsize power used for sufficient decrease in such a way that the number of samples needed per iteration is reduced. In previous works, the number of samples necessary for global convergence at every iteration [math] of this type of algorithm was [math], where [math] is the stepsize or trust-region radius. However, using the new tail bound condition, and under mild assumptions on the noise, one can prove that such a number of samples is only [math], where [math] can be made arbitrarily small by selecting the power of the stepsize in the sufficient decrease test arbitrarily close to 1. In the common random number generator setting, a further improvement by a factor of [math] can be obtained. The global convergence properties of the stochastic direct-search and trust-region algorithms are established under the new tail bound condition.","PeriodicalId":49529,"journal":{"name":"SIAM Journal on Optimization","volume":null,"pages":null},"PeriodicalIF":2.6000,"publicationDate":"2024-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"SIAM Journal on Optimization","FirstCategoryId":"100","ListUrlMain":"https://doi.org/10.1137/22m1543446","RegionNum":1,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"MATHEMATICS, APPLIED","Score":null,"Total":0}
Citations: 0
Abstract
Using tail bounds, we introduce a new probabilistic condition for function estimation in stochastic derivative-free optimization (SDFO) which leads to a reduction in the number of samples and eases algorithmic analyses. Moreover, we develop simple stochastic direct-search and trust-region methods for the optimization of a potentially nonsmooth function whose values can only be estimated via stochastic observations. For trial points to be accepted, these algorithms require the estimated function values to yield a sufficient decrease measured in terms of a power larger than 1 of the algorithmic stepsize. Our new tail bound condition is imposed precisely on the reduction estimate used to achieve such a sufficient decrease. This condition allows us to select the stepsize power used for sufficient decrease in such a way that the number of samples needed per iteration is reduced. In previous works, the number of samples necessary for global convergence at every iteration $k$ of this type of algorithm was $\mathcal{O}(\Delta_k^{-4})$, where $\Delta_k$ is the stepsize or trust-region radius. However, using the new tail bound condition, and under mild assumptions on the noise, one can prove that such a number of samples is only $\mathcal{O}(\Delta_k^{-2-\varepsilon})$, where $\varepsilon > 0$ can be made arbitrarily small by selecting the power of the stepsize in the sufficient decrease test arbitrarily close to 1. In the common random number generator setting, a further improvement by a factor of $\Delta_k^{2}$ can be obtained. The global convergence properties of the stochastic direct-search and trust-region algorithms are established under the new tail bound condition.
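To make the mechanism described in the abstract concrete, the following is a minimal Python sketch of a stochastic direct-search iteration of the kind discussed, not the paper's actual algorithm: the function names (`estimate`, `stochastic_direct_search`), the coordinate poll set, and the constants `c`, `q`, `eps`, `gamma`, `theta` are all illustrative assumptions. It shows the two ingredients highlighted above: a sufficient decrease test measured by a power `q > 1` of the stepsize, and a per-iteration sample count of order $\Delta_k^{-(2+\varepsilon)}$.

```python
import numpy as np

def estimate(f_noisy, x, n):
    """Average n i.i.d. noisy observations of f at x."""
    return np.mean([f_noisy(x) for _ in range(n)])

def stochastic_direct_search(f_noisy, x0, alpha0=1.0, q=1.1, eps=0.1,
                             c=1e-3, gamma=2.0, theta=0.5, max_iter=100):
    """Hypothetical stochastic direct-search loop (illustrative sketch).

    Per-iteration sample size grows like alpha_k**-(2 + eps); a trial
    point is accepted only if its estimated value yields a decrease of
    at least c * alpha_k**q, with stepsize power q > 1.
    """
    x, alpha = np.asarray(x0, dtype=float), alpha0
    dim = x.size
    for _ in range(max_iter):
        # Sample size n_k = O(alpha_k^{-(2+eps)}): the reduced rate,
        # versus the O(alpha_k^{-4}) required in previous works.
        n = max(1, int(np.ceil(alpha ** -(2.0 + eps))))
        f_x = estimate(f_noisy, x, n)
        accepted = False
        # Poll along the coordinate directions and their negatives.
        for d in np.vstack([np.eye(dim), -np.eye(dim)]):
            trial = x + alpha * d
            # Sufficient decrease test with stepsize power q > 1.
            if estimate(f_noisy, trial, n) <= f_x - c * alpha ** q:
                x, alpha, accepted = trial, gamma * alpha, True
                break
        if not accepted:
            alpha *= theta  # shrink the stepsize on unsuccessful iterations
    return x

# Usage: minimize a quadratic observed under additive Gaussian noise.
rng = np.random.default_rng(0)
f_noisy = lambda x: np.sum(x ** 2) + 0.01 * rng.standard_normal()
x_star = stochastic_direct_search(f_noisy, x0=[2.0, -1.5])
```

Note how choosing `q` close to 1 lets `eps` be small, so the sample count per iteration grows only slightly faster than $\Delta_k^{-2}$ as the stepsize shrinks; this is the sample-size reduction the tail bound condition enables.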
About the journal:
The SIAM Journal on Optimization contains research articles on the theory and practice of optimization. The areas addressed include linear and quadratic programming, convex programming, nonlinear programming, complementarity problems, stochastic optimization, combinatorial optimization, integer programming, and convex, nonsmooth and variational analysis. Contributions may emphasize optimization theory, algorithms, software, computational practice, applications, or the links between these subjects.