Stochastic Trust-Region and Direct-Search Methods: A Weak Tail Bound Condition and Reduced Sample Sizing

SIAM Journal on Optimization | Pub Date: 2024-06-14 | DOI: 10.1137/22m1543446 | IF: 2.6 | JCR: Q1 (Mathematics, Applied) | CAS: Tier 1 (Mathematics)
F. Rinaldi, L. N. Vicente, D. Zeffiro
{"title":"Stochastic Trust-Region and Direct-Search Methods: A Weak Tail Bound Condition and Reduced Sample Sizing","authors":"F. Rinaldi, L. N. Vicente, D. Zeffiro","doi":"10.1137/22m1543446","DOIUrl":null,"url":null,"abstract":"SIAM Journal on Optimization, Volume 34, Issue 2, Page 2067-2092, June 2024. <br/> Abstract. Using tail bounds, we introduce a new probabilistic condition for function estimation in stochastic derivative-free optimization (SDFO) which leads to a reduction in the number of samples and eases algorithmic analyses. Moreover, we develop simple stochastic direct-search and trust-region methods for the optimization of a potentially nonsmooth function whose values can only be estimated via stochastic observations. For trial points to be accepted, these algorithms require the estimated function values to yield a sufficient decrease measured in terms of a power larger than 1 of the algoritmic stepsize. Our new tail bound condition is precisely imposed on the reduction estimate used to achieve such a sufficient decrease. This condition allows us to select the stepsize power used for sufficient decrease in such a way that the number of samples needed per iteration is reduced. In previous works, the number of samples necessary for global convergence at every iteration [math] of this type of algorithm was [math], where [math] is the stepsize or trust-region radius. However, using the new tail bound condition, and under mild assumptions on the noise, one can prove that such a number of samples is only [math], where [math] can be made arbitrarily small by selecting the power of the stepsize in the sufficient decrease test arbitrarily close to 1. In the common random number generator setting, a further improvement by a factor of [math] can be obtained. The global convergence properties of the stochastic direct-search and trust-region algorithms are established under the new tail bound condition.","PeriodicalId":49529,"journal":{"name":"SIAM Journal on Optimization","volume":null,"pages":null},"PeriodicalIF":2.6000,"publicationDate":"2024-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"SIAM Journal on Optimization","FirstCategoryId":"100","ListUrlMain":"https://doi.org/10.1137/22m1543446","RegionNum":1,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"MATHEMATICS, APPLIED","Score":null,"Total":0}
Citations: 0

Abstract

SIAM Journal on Optimization, Volume 34, Issue 2, Page 2067-2092, June 2024.
Abstract. Using tail bounds, we introduce a new probabilistic condition for function estimation in stochastic derivative-free optimization (SDFO) which leads to a reduction in the number of samples and eases algorithmic analyses. Moreover, we develop simple stochastic direct-search and trust-region methods for the optimization of a potentially nonsmooth function whose values can only be estimated via stochastic observations. For trial points to be accepted, these algorithms require the estimated function values to yield a sufficient decrease measured in terms of a power larger than 1 of the algorithmic stepsize. Our new tail bound condition is precisely imposed on the reduction estimate used to achieve such a sufficient decrease. This condition allows us to select the stepsize power used for sufficient decrease in such a way that the number of samples needed per iteration is reduced. In previous works, the number of samples necessary for global convergence at every iteration $k$ of this type of algorithm was $\mathcal{O}(\Delta_k^{-4})$, where $\Delta_k$ is the stepsize or trust-region radius. However, using the new tail bound condition, and under mild assumptions on the noise, one can prove that such a number of samples is only $\mathcal{O}(\Delta_k^{-2-\varepsilon})$, where $\varepsilon > 0$ can be made arbitrarily small by selecting the power of the stepsize in the sufficient decrease test arbitrarily close to 1. In the common random number generator setting, a further improvement by a factor of $\Delta_k^{2}$ can be obtained. The global convergence properties of the stochastic direct-search and trust-region algorithms are established under the new tail bound condition.
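To make the sample-sizing idea concrete, the following Python sketch shows a stochastic direct-search loop of the kind the abstract describes. It is an illustration built only from the abstract, not the authors' algorithm: the names (`f_noisy`, `estimate`, `stochastic_direct_search`) and the parameter choices (`q`, `eps`, `c`, `gamma`) are hypothetical. The two ingredients it mirrors are the sufficient decrease test using a stepsize power $q > 1$ and a per-iteration sample count growing like $\Delta_k^{-(2+\varepsilon)}$.

```python
import numpy as np

def estimate(f_noisy, x, n_samples, rng):
    """Average n_samples noisy observations of f at x."""
    return np.mean([f_noisy(x, rng) for _ in range(n_samples)])

def stochastic_direct_search(f_noisy, x0, delta0=1.0, q=1.1, eps=0.1,
                             c=1.0, gamma=0.5, max_iter=100, seed=0):
    """Illustrative stochastic direct-search loop (a sketch, not the
    paper's exact method). A polling point is accepted only when the
    *estimated* decrease exceeds c * delta**q with q > 1, and the
    per-iteration sample count is sized like delta**-(2 + eps),
    mirroring the O(Delta_k^{-2-eps}) bound in the abstract."""
    rng = np.random.default_rng(seed)
    x, delta = np.asarray(x0, dtype=float), delta0
    n = len(x)
    for _ in range(max_iter):
        # Sample size grows as the stepsize shrinks: ~ delta^{-(2+eps)}.
        n_samples = max(1, int(np.ceil(delta ** -(2.0 + eps))))
        f_x = estimate(f_noisy, x, n_samples, rng)
        accepted = False
        # Poll along the positive and negative coordinate directions.
        for d in np.vstack([np.eye(n), -np.eye(n)]):
            trial = x + delta * d
            f_trial = estimate(f_noisy, trial, n_samples, rng)
            # Sufficient decrease test with stepsize power q > 1.
            if f_x - f_trial >= c * delta ** q:
                x, accepted = trial, True
                break
        # Expand the stepsize on success, contract it on failure.
        delta = delta / gamma if accepted else delta * gamma
    return x, delta

# Example: minimize a noisy quadratic f(x) = ||x||^2 + noise.
f_noisy = lambda x, rng: float(np.dot(x, x) + 0.01 * rng.standard_normal())
x_star, _ = stochastic_direct_search(f_noisy, x0=[1.0, -1.0])
```

The sample-size savings are easy to see numerically: at $\Delta_k = 0.01$, the classical $\Delta_k^{-4}$ sizing requires $10^8$ samples, whereas $\Delta_k^{-2.1}$ (i.e., $\varepsilon = 0.1$, as in the sketch above) requires only about $1.6 \times 10^4$, which illustrates the scale of the reduction the paper quantifies.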
Source Journal
SIAM Journal on Optimization (Mathematics: Applied Mathematics)
CiteScore: 5.30
Self-citation rate: 9.70%
Articles per year: 101
Review time: 6-12 weeks
Journal Description: The SIAM Journal on Optimization contains research articles on the theory and practice of optimization. The areas addressed include linear and quadratic programming, convex programming, nonlinear programming, complementarity problems, stochastic optimization, combinatorial optimization, integer programming, and convex, nonsmooth and variational analysis. Contributions may emphasize optimization theory, algorithms, software, computational practice, applications, or the links between these subjects.
Latest Articles in This Journal
Corrigendum and Addendum: Newton Differentiability of Convex Functions in Normed Spaces and of a Class of Operators
Newton-Based Alternating Methods for the Ground State of a Class of Multicomponent Bose–Einstein Condensates
Minimum Spanning Trees in Infinite Graphs: Theory and Algorithms
On Minimal Extended Representations of Generalized Power Cones
A Functional Model Method for Nonconvex Nonsmooth Conditional Stochastic Optimization