Accelerated stochastic approximation with state-dependent noise

IF 2.2 · CAS Zone 2 (Mathematics) · JCR Q2 (Computer Science, Software Engineering) · Mathematical Programming · Pub Date: 2024-08-27 · DOI: 10.1007/s10107-024-02138-4
Sasila Ilandarideva, Anatoli Juditsky, Guanghui Lan, Tianjiao Li
{"title":"Accelerated stochastic approximation with state-dependent noise","authors":"Sasila Ilandarideva, Anatoli Juditsky, Guanghui Lan, Tianjiao Li","doi":"10.1007/s10107-024-02138-4","DOIUrl":null,"url":null,"abstract":"<p>We consider a class of stochastic smooth convex optimization problems under rather general assumptions on the noise in the stochastic gradient observation. As opposed to the classical problem setting in which the variance of noise is assumed to be uniformly bounded, herein we assume that the variance of stochastic gradients is related to the “sub-optimality” of the approximate solutions delivered by the algorithm. Such problems naturally arise in a variety of applications, in particular, in the well-known generalized linear regression problem in statistics. However, to the best of our knowledge, none of the existing stochastic approximation algorithms for solving this class of problems attain optimality in terms of the dependence on accuracy, problem parameters, and mini-batch size. We discuss two non-Euclidean accelerated stochastic approximation routines—stochastic accelerated gradient descent (SAGD) and stochastic gradient extrapolation (SGE)—which carry a particular duality relationship. We show that both SAGD and SGE, under appropriate conditions, achieve the optimal convergence rate, attaining the optimal iteration and sample complexities simultaneously. However, corresponding assumptions for the SGE algorithm are more general; they allow, for instance, for efficient application of the SGE to statistical estimation problems under heavy tail noises and discontinuous score functions. We also discuss the application of the SGE to problems satisfying quadratic growth conditions, and show how it can be used to recover sparse solutions. Finally, we report on some simulation experiments to illustrate numerical performance of our proposed algorithms in high-dimensional settings.</p>","PeriodicalId":18297,"journal":{"name":"Mathematical Programming","volume":null,"pages":null},"PeriodicalIF":2.2000,"publicationDate":"2024-08-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Mathematical Programming","FirstCategoryId":"100","ListUrlMain":"https://doi.org/10.1007/s10107-024-02138-4","RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, SOFTWARE ENGINEERING","Score":null,"Total":0}
Citations: 0

Abstract

We consider a class of stochastic smooth convex optimization problems under rather general assumptions on the noise in the stochastic gradient observation. As opposed to the classical problem setting in which the variance of noise is assumed to be uniformly bounded, herein we assume that the variance of stochastic gradients is related to the “sub-optimality” of the approximate solutions delivered by the algorithm. Such problems naturally arise in a variety of applications, in particular, in the well-known generalized linear regression problem in statistics. However, to the best of our knowledge, none of the existing stochastic approximation algorithms for solving this class of problems attain optimality in terms of the dependence on accuracy, problem parameters, and mini-batch size. We discuss two non-Euclidean accelerated stochastic approximation routines—stochastic accelerated gradient descent (SAGD) and stochastic gradient extrapolation (SGE)—which carry a particular duality relationship. We show that both SAGD and SGE, under appropriate conditions, achieve the optimal convergence rate, attaining the optimal iteration and sample complexities simultaneously. However, corresponding assumptions for the SGE algorithm are more general; they allow, for instance, for efficient application of the SGE to statistical estimation problems under heavy-tailed noise and discontinuous score functions. We also discuss the application of the SGE to problems satisfying quadratic growth conditions, and show how it can be used to recover sparse solutions. Finally, we report on some simulation experiments to illustrate the numerical performance of our proposed algorithms in high-dimensional settings.
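To make the noise model concrete, below is a minimal Python sketch of the state-dependent noise assumption in least-squares regression with noiseless labels: the variance of the mini-batch gradient scales with the sub-optimality of the current iterate and vanishes at the minimizer. It is not the paper's SAGD or SGE routines, whose non-Euclidean updates and stepsize policies are specified in the paper; the dimensions, batch size, and stepsize below are illustrative assumptions, and the plain SGD loop is included only as a baseline showing that, under such multiplicative noise, convergence to the exact minimizer is possible with a constant stepsize.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 50, 16                                   # dimension and mini-batch size (illustrative)
x_star = rng.standard_normal(d) / np.sqrt(d)    # ground-truth parameter, ||x_star|| ~ 1

def minibatch_gradient(x):
    """Mini-batch gradient of f(x) = 0.5 * E[(<a, x> - y)^2] with a ~ N(0, I) and
    noiseless labels y = <a, x_star>. The estimate equals (1/m) * A^T A (x - x_star),
    so its noise is state-dependent: its variance scales with ||x - x_star||^2 and
    vanishes at the minimizer."""
    A = rng.standard_normal((m, d))
    return A.T @ (A @ (x - x_star)) / m

# (1) Gradient-noise magnitude scales with the distance to the optimum.
for r in (1.0, 0.1, 0.01):
    x = x_star + r * rng.standard_normal(d) / np.sqrt(d)   # point at distance ~ r from x_star
    grads = np.stack([minibatch_gradient(x) for _ in range(200)])
    print(f"||x - x*|| ~ {r:5.2f}   gradient-noise std ~ {(grads - grads.mean(0)).std():.4f}")

# (2) Plain constant-stepsize mini-batch SGD baseline (not the paper's accelerated
# SAGD/SGE schemes): because the noise vanishes at x_star, the iterates converge to
# the exact minimizer without diminishing stepsizes or variance reduction.
x, eta = np.zeros(d), 0.2
for t in range(1, 301):
    x -= eta * minibatch_gradient(x)
    if t % 100 == 0:
        print(f"iter {t:3d}   ||x - x*||^2 = {np.linalg.norm(x - x_star) ** 2:.2e}")
```

Running the sketch should print a gradient-noise standard deviation that shrinks roughly in proportion to ||x - x*|| and a squared error that decays geometrically (to near machine precision in this noiseless setting), which is the qualitative behavior the state-dependent noise assumption captures.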

Source journal: Mathematical Programming (Mathematics - Computer Science: Software Engineering)
CiteScore: 5.70
Self-citation rate: 11.10%
Annual publications: 160
Review time: 4-8 weeks
Journal description: Mathematical Programming publishes original articles dealing with every aspect of mathematical optimization; that is, everything of direct or indirect use concerning the problem of optimizing a function of many variables, often subject to a set of constraints. This involves theoretical and computational issues as well as application studies. Included, along with the standard topics of linear, nonlinear, integer, conic, stochastic and combinatorial optimization, are techniques for formulating and applying mathematical programming models, convex, nonsmooth and variational analysis, the theory of polyhedra, variational inequalities, and control and game theory viewed from the perspective of mathematical programming.
Latest articles in this journal:
- Fast convergence to non-isolated minima: four equivalent conditions for C^2 functions
- Complexity of chordal conversion for sparse semidefinite programs with small treewidth
- Recycling valid inequalities for robust combinatorial optimization with budgeted uncertainty
- Accelerated stochastic approximation with state-dependent noise
- Nonlinear conjugate gradient methods: worst-case convergence rates via computer-assisted analyses