{"title":"Stochastic Trust-Region Algorithm in Random Subspaces with Convergence and Expected Complexity Analyses","authors":"K. J. Dzahini, S. M. Wild","doi":"10.1137/22m1524072","DOIUrl":null,"url":null,"abstract":"SIAM Journal on Optimization, Volume 34, Issue 3, Page 2671-2699, September 2024. <br/> Abstract. This work proposes a framework for large-scale stochastic derivative-free optimization (DFO) by introducing STARS, a trust-region method based on iterative minimization in random subspaces. This framework is both an algorithmic and theoretical extension of a random subspace derivative-free optimization (RSDFO) framework, and an algorithm for stochastic optimization with random models (STORM). Moreover, like RSDFO, STARS achieves scalability by minimizing interpolation models that approximate the objective in low-dimensional affine subspaces, thus significantly reducing per-iteration costs in terms of function evaluations and yielding strong performance on large-scale stochastic DFO problems. The user-determined dimension of these subspaces, when the latter are defined, for example, by the columns of so-called Johnson–Lindenstrauss transforms, turns out to be independent of the dimension of the problem. For convergence purposes, inspired by the analyses of RSDFO and STORM, both a particular quality of the subspace and the accuracies of random function estimates and models are required to hold with sufficiently high, but fixed, probabilities. 
Using martingale theory under the latter assumptions, an almost sure global convergence of STARS to a first-order stationary point is shown, and the expected number of iterations required to reach a desired first-order accuracy is proved to be similar to that of STORM and other stochastic DFO algorithms, up to constants.","PeriodicalId":49529,"journal":{"name":"SIAM Journal on Optimization","volume":null,"pages":null},"PeriodicalIF":2.6000,"publicationDate":"2024-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"SIAM Journal on Optimization","FirstCategoryId":"100","ListUrlMain":"https://doi.org/10.1137/22m1524072","RegionNum":1,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"MATHEMATICS, APPLIED","Score":null,"Total":0}
Citations: 0
Abstract
SIAM Journal on Optimization, Volume 34, Issue 3, pp. 2671-2699, September 2024. This work proposes a framework for large-scale stochastic derivative-free optimization (DFO) by introducing STARS, a trust-region method based on iterative minimization in random subspaces. This framework is an algorithmic and theoretical extension of both a random-subspace derivative-free optimization (RSDFO) framework and an algorithm for stochastic optimization with random models (STORM). Moreover, like RSDFO, STARS achieves scalability by minimizing interpolation models that approximate the objective in low-dimensional affine subspaces, thus significantly reducing per-iteration costs in terms of function evaluations and yielding strong performance on large-scale stochastic DFO problems. When these subspaces are defined, for example, by the columns of so-called Johnson–Lindenstrauss transforms, their user-determined dimension turns out to be independent of the problem dimension. For convergence purposes, inspired by the analyses of RSDFO and STORM, both a particular quality of the subspace and the accuracies of random function estimates and models are required to hold with sufficiently high, but fixed, probabilities. Using martingale theory under the latter assumptions, almost sure global convergence of STARS to a first-order stationary point is shown, and the expected number of iterations required to reach a desired first-order accuracy is proved to be similar to that of STORM and other stochastic DFO algorithms, up to constants.
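To make the random-subspace idea concrete, the following is a minimal illustrative sketch, not the authors' STARS algorithm: each iteration draws a Gaussian Johnson–Lindenstrauss matrix whose columns span a random low-dimensional subspace, builds a finite-difference linear model of the objective restricted to that subspace, and applies a standard trust-region acceptance test. All parameter names (`p`, `gamma_inc`, `gamma_dec`, `eta`) and the Cauchy-type step are assumptions chosen for simplicity; the paper's method uses interpolation models and probabilistic accuracy requirements that this toy loop omits.

```python
import numpy as np

def jl_subspace(n, p, rng):
    # Gaussian Johnson-Lindenstrauss matrix: its p columns span a random
    # p-dimensional subspace of R^n; note p is chosen independently of n.
    return rng.standard_normal((n, p)) / np.sqrt(p)

def subspace_tr_sketch(f, x, delta=1.0, p=2, iters=200, seed=0,
                       gamma_inc=2.0, gamma_dec=0.5, eta=0.1):
    """Toy random-subspace trust-region loop (illustration only).

    f: (possibly noisy) objective mapping R^n -> R; delta: trust-region
    radius. Each iteration models f on the affine subspace x + span(Q)
    via forward finite differences, i.e., only p + 1 evaluations of f.
    """
    rng = np.random.default_rng(seed)
    n = x.size
    for _ in range(iters):
        Q = jl_subspace(n, p, rng)
        # Finite-difference estimate of the subspace gradient g ~ Q^T grad f(x).
        h = 1e-6 * max(delta, 1.0)
        fx = f(x)
        g = np.array([(f(x + h * Q[:, j]) - fx) / h for j in range(p)])
        gn = np.linalg.norm(g)
        if gn == 0.0:
            delta *= gamma_dec
            continue
        # Cauchy-type step: full trust-region step along -g, mapped back to R^n.
        s = -(delta / gn) * (Q @ g)
        pred = delta * gn            # model-predicted decrease
        ared = fx - f(x + s)         # actual (estimated) decrease
        if ared >= eta * pred:       # sufficient-decrease test
            x, delta = x + s, gamma_inc * delta
        else:
            delta *= gamma_dec
    return x
```

The per-iteration cost of p + 1 function evaluations, independent of n, is what the abstract refers to as the scalability gained from minimizing models in low-dimensional subspaces.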
Journal description:
The SIAM Journal on Optimization contains research articles on the theory and practice of optimization. The areas addressed include linear and quadratic programming, convex programming, nonlinear programming, complementarity problems, stochastic optimization, combinatorial optimization, integer programming, and convex, nonsmooth and variational analysis. Contributions may emphasize optimization theory, algorithms, software, computational practice, applications, or the links between these subjects.