A Gradient Complexity Analysis for Minimizing the Sum of Strongly Convex Functions with Varying Condition Numbers

IF 2.6 | Zone 1, Mathematics | Q1 MATHEMATICS, APPLIED | SIAM Journal on Optimization | Pub Date: 2024-04-11 | DOI: 10.1137/22m1503646
Nuozhou Wang, Shuzhong Zhang
{"title":"最小化条件数强凸函数之和的梯度复杂性分析","authors":"Nuozhou Wang, Shuzhong Zhang","doi":"10.1137/22m1503646","DOIUrl":null,"url":null,"abstract":"SIAM Journal on Optimization, Volume 34, Issue 2, Page 1374-1401, June 2024. <br/> Abstract. A popular approach to minimizing a finite sum of smooth convex functions is stochastic gradient descent (SGD) and its variants. Fundamental research questions associated with SGD include (i) how to find a lower bound on the number of times that the gradient oracle of each individual function must be assessed in order to find an [math]-minimizer of the overall objective; (ii) how to design algorithms which guarantee finding an [math]-minimizer of the overall objective in expectation no more than a certain number of times (in terms of [math]) that the gradient oracle of each function needs to be assessed (i.e., upper bound). If these two bounds are at the same order of magnitude, then the algorithms may be called optimal. Most existing results along this line of research typically assume that the functions in the objective share the same condition number. In this paper, the first model we study is the problem of minimizing the sum of finitely many strongly convex functions whose condition numbers are all different. We propose an SGD-based method for this model and show that it is optimal in gradient computations, up to a logarithmic factor. We then consider a constrained separate block optimization model and present lower and upper bounds for its gradient computation complexity. Next, we propose solving the Fenchel dual of the constrained block optimization model via generalized SSNM, which we introduce earlier, and show that it yields a lower iteration complexity than solving the original model by the ADMM-type approach. Finally, we extend the analysis to the general composite convex optimization model and obtain gradient-computation complexity results under certain conditions.","PeriodicalId":49529,"journal":{"name":"SIAM Journal on Optimization","volume":null,"pages":null},"PeriodicalIF":2.6000,"publicationDate":"2024-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A Gradient Complexity Analysis for Minimizing the Sum of Strongly Convex Functions with Varying Condition Numbers\",\"authors\":\"Nuozhou Wang, Shuzhong Zhang\",\"doi\":\"10.1137/22m1503646\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"SIAM Journal on Optimization, Volume 34, Issue 2, Page 1374-1401, June 2024. <br/> Abstract. A popular approach to minimizing a finite sum of smooth convex functions is stochastic gradient descent (SGD) and its variants. Fundamental research questions associated with SGD include (i) how to find a lower bound on the number of times that the gradient oracle of each individual function must be assessed in order to find an [math]-minimizer of the overall objective; (ii) how to design algorithms which guarantee finding an [math]-minimizer of the overall objective in expectation no more than a certain number of times (in terms of [math]) that the gradient oracle of each function needs to be assessed (i.e., upper bound). If these two bounds are at the same order of magnitude, then the algorithms may be called optimal. Most existing results along this line of research typically assume that the functions in the objective share the same condition number. 
In this paper, the first model we study is the problem of minimizing the sum of finitely many strongly convex functions whose condition numbers are all different. We propose an SGD-based method for this model and show that it is optimal in gradient computations, up to a logarithmic factor. We then consider a constrained separate block optimization model and present lower and upper bounds for its gradient computation complexity. Next, we propose solving the Fenchel dual of the constrained block optimization model via generalized SSNM, which we introduce earlier, and show that it yields a lower iteration complexity than solving the original model by the ADMM-type approach. Finally, we extend the analysis to the general composite convex optimization model and obtain gradient-computation complexity results under certain conditions.\",\"PeriodicalId\":49529,\"journal\":{\"name\":\"SIAM Journal on Optimization\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":2.6000,\"publicationDate\":\"2024-04-11\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"SIAM Journal on Optimization\",\"FirstCategoryId\":\"100\",\"ListUrlMain\":\"https://doi.org/10.1137/22m1503646\",\"RegionNum\":1,\"RegionCategory\":\"数学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"MATHEMATICS, APPLIED\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"SIAM Journal on Optimization","FirstCategoryId":"100","ListUrlMain":"https://doi.org/10.1137/22m1503646","RegionNum":1,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"MATHEMATICS, APPLIED","Score":null,"Total":0}
Citations: 0

Abstract

SIAM Journal on Optimization, Volume 34, Issue 2, Page 1374-1401, June 2024.

A popular approach to minimizing a finite sum of smooth convex functions is stochastic gradient descent (SGD) and its variants. Fundamental research questions associated with SGD include (i) how to find a lower bound on the number of times that the gradient oracle of each individual function must be assessed in order to find an $\epsilon$-minimizer of the overall objective; (ii) how to design algorithms which guarantee finding an $\epsilon$-minimizer of the overall objective in expectation with no more than a certain number (in terms of $\epsilon$) of assessments of the gradient oracle of each function (i.e., an upper bound). If these two bounds are of the same order of magnitude, the algorithms may be called optimal. Most existing results along this line of research typically assume that the functions in the objective share the same condition number. In this paper, the first model we study is the problem of minimizing the sum of finitely many strongly convex functions whose condition numbers are all different. We propose an SGD-based method for this model and show that it is optimal in gradient computations, up to a logarithmic factor. We then consider a constrained separate block optimization model and present lower and upper bounds for its gradient computation complexity. Next, we propose solving the Fenchel dual of the constrained block optimization model via a generalized SSNM, introduced earlier in the paper, and show that it yields a lower iteration complexity than solving the original model by an ADMM-type approach. Finally, we extend the analysis to the general composite convex optimization model and obtain gradient-computation complexity results under certain conditions.
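To illustrate the setting the abstract describes (minimizing $F(x)=\sum_{i=1}^{n} f_i(x)$, where each smooth, strongly convex $f_i$ has its own condition number $\kappa_i = L_i/\mu_i$), the sketch below runs a plain SGD baseline on strongly convex quadratics with deliberately different condition numbers. This is not the algorithm proposed in the paper; the problem instances, step size, and names are assumptions made purely for illustration.

```python
# Minimal sketch (not the paper's method): plain SGD on a finite sum
# F(x) = sum_i f_i(x) of strongly convex quadratics, where each f_i has its
# own condition number kappa_i = L_i / mu_i.  All constants are illustrative.
import numpy as np

rng = np.random.default_rng(0)
d, n = 20, 5                                   # dimension, number of components

mus = np.array([1.0, 0.5, 2.0, 0.1, 1.5])      # strong-convexity moduli mu_i
Ls  = np.array([10.0, 50.0, 4.0, 100.0, 3.0])  # smoothness constants L_i
As, bs = [], []
for mu, L in zip(mus, Ls):
    # f_i(x) = 0.5 x^T A_i x - b_i^T x, with eigenvalues of A_i in [mu_i, L_i]
    Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
    As.append(Q @ np.diag(np.linspace(mu, L, d)) @ Q.T)
    bs.append(rng.standard_normal(d))

def grad_f(i, x):
    """Gradient of the i-th component: nabla f_i(x) = A_i x - b_i."""
    return As[i] @ x - bs[i]

# Plain SGD: sample one component per step; n * grad_f(i, x) is an unbiased
# estimate of the full gradient of F(x) = sum_i f_i(x).
x = np.zeros(d)
eta = 1.0 / (n * Ls.max())                     # conservative constant step (assumed)
for t in range(20000):
    i = rng.integers(n)
    x -= eta * n * grad_f(i, x)

# Exact minimizer of the quadratic sum, for comparison.
x_star = np.linalg.solve(sum(As), sum(bs))
print("distance to minimizer:", np.linalg.norm(x - x_star))
```

With a constant step size, SGD only settles in a neighborhood of the minimizer whose radius scales with the step size; the lower and upper bounds discussed in the abstract concern how many per-component gradient evaluations are needed to reach an $\epsilon$-minimizer.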
Source journal: SIAM Journal on Optimization (Mathematics - Applied Mathematics)
CiteScore: 5.30
Self-citation rate: 9.70%
Articles published: 101
Review time: 6-12 weeks
Journal description: The SIAM Journal on Optimization contains research articles on the theory and practice of optimization. The areas addressed include linear and quadratic programming, convex programming, nonlinear programming, complementarity problems, stochastic optimization, combinatorial optimization, integer programming, and convex, nonsmooth and variational analysis. Contributions may emphasize optimization theory, algorithms, software, computational practice, applications, or the links between these subjects.
Latest articles in this journal:
Corrigendum and Addendum: Newton Differentiability of Convex Functions in Normed Spaces and of a Class of Operators
Newton-Based Alternating Methods for the Ground State of a Class of Multicomponent Bose–Einstein Condensates
Minimum Spanning Trees in Infinite Graphs: Theory and Algorithms
On Minimal Extended Representations of Generalized Power Cones
A Functional Model Method for Nonconvex Nonsmooth Conditional Stochastic Optimization