Convergence Analysis of Accelerated Stochastic Gradient Descent Under the Growth Condition

IF 1.4 | CAS Tier 3 (Mathematics) | Q2 MATHEMATICS, APPLIED | Mathematics of Operations Research | Pub Date: 2023-12-06 | DOI: 10.1287/moor.2021.0293
You-Lin Chen, Sen Na, Mladen Kolar
{"title":"增长条件下加速随机梯度下降的收敛分析","authors":"You-Lin Chen, Sen Na, Mladen Kolar","doi":"10.1287/moor.2021.0293","DOIUrl":null,"url":null,"abstract":"We study the convergence of accelerated stochastic gradient descent (SGD) for strongly convex objectives under the growth condition, which states that the variance of stochastic gradient is bounded by a multiplicative part that grows with the full gradient and a constant additive part. Through the lens of the growth condition, we investigate four widely used accelerated methods: Nesterov’s accelerated method (NAM), robust momentum method (RMM), accelerated dual averaging method (DAM+), and implicit DAM+ (iDAM+). Although these methods are known to improve the convergence rate of SGD under the condition that the stochastic gradient has bounded variance, it is not well understood how their convergence rates are affected by the multiplicative noise. In this paper, we show that these methods all converge to a neighborhood of the optimum with accelerated convergence rates (compared with SGD), even under the growth condition. In particular, NAM, RMM, and iDAM+ enjoy acceleration only with a mild multiplicative noise, whereas DAM+ enjoys acceleration, even with a large multiplicative noise. Furthermore, we propose a generic tail-averaged scheme that allows the accelerated rates of DAM+ and iDAM+ to nearly attain the theoretical lower bound (up to a logarithmic factor in the variance term). We conduct numerical experiments to support our theoretical conclusions.","PeriodicalId":49852,"journal":{"name":"Mathematics of Operations Research","volume":"24 1","pages":""},"PeriodicalIF":1.4000,"publicationDate":"2023-12-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":"{\"title\":\"Convergence Analysis of Accelerated Stochastic Gradient Descent Under the Growth Condition\",\"authors\":\"You-Lin Chen, Sen Na, Mladen Kolar\",\"doi\":\"10.1287/moor.2021.0293\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"We study the convergence of accelerated stochastic gradient descent (SGD) for strongly convex objectives under the growth condition, which states that the variance of stochastic gradient is bounded by a multiplicative part that grows with the full gradient and a constant additive part. Through the lens of the growth condition, we investigate four widely used accelerated methods: Nesterov’s accelerated method (NAM), robust momentum method (RMM), accelerated dual averaging method (DAM+), and implicit DAM+ (iDAM+). Although these methods are known to improve the convergence rate of SGD under the condition that the stochastic gradient has bounded variance, it is not well understood how their convergence rates are affected by the multiplicative noise. In this paper, we show that these methods all converge to a neighborhood of the optimum with accelerated convergence rates (compared with SGD), even under the growth condition. In particular, NAM, RMM, and iDAM+ enjoy acceleration only with a mild multiplicative noise, whereas DAM+ enjoys acceleration, even with a large multiplicative noise. Furthermore, we propose a generic tail-averaged scheme that allows the accelerated rates of DAM+ and iDAM+ to nearly attain the theoretical lower bound (up to a logarithmic factor in the variance term). 
We conduct numerical experiments to support our theoretical conclusions.\",\"PeriodicalId\":49852,\"journal\":{\"name\":\"Mathematics of Operations Research\",\"volume\":\"24 1\",\"pages\":\"\"},\"PeriodicalIF\":1.4000,\"publicationDate\":\"2023-12-06\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"3\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Mathematics of Operations Research\",\"FirstCategoryId\":\"100\",\"ListUrlMain\":\"https://doi.org/10.1287/moor.2021.0293\",\"RegionNum\":3,\"RegionCategory\":\"数学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"MATHEMATICS, APPLIED\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Mathematics of Operations Research","FirstCategoryId":"100","ListUrlMain":"https://doi.org/10.1287/moor.2021.0293","RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"MATHEMATICS, APPLIED","Score":null,"Total":0}
Citations: 3

Abstract

We study the convergence of accelerated stochastic gradient descent (SGD) for strongly convex objectives under the growth condition, which states that the variance of the stochastic gradient is bounded by a multiplicative part that grows with the full gradient and a constant additive part. Through the lens of the growth condition, we investigate four widely used accelerated methods: Nesterov's accelerated method (NAM), the robust momentum method (RMM), the accelerated dual averaging method (DAM+), and implicit DAM+ (iDAM+). Although these methods are known to improve the convergence rate of SGD under the assumption that the stochastic gradient has bounded variance, it is not well understood how their convergence rates are affected by multiplicative noise. In this paper, we show that all of these methods converge to a neighborhood of the optimum with accelerated convergence rates (compared with SGD), even under the growth condition. In particular, NAM, RMM, and iDAM+ enjoy acceleration only under mild multiplicative noise, whereas DAM+ enjoys acceleration even under large multiplicative noise. Furthermore, we propose a generic tail-averaged scheme that allows the accelerated rates of DAM+ and iDAM+ to nearly attain the theoretical lower bound (up to a logarithmic factor in the variance term). We conduct numerical experiments to support our theoretical conclusions.
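For concreteness, a standard way to state the growth condition described in the abstract is the bound below; the symbols ρ² (multiplicative noise level) and σ² (additive noise level) are illustrative and need not match the paper's exact notation.

```latex
% Growth condition (illustrative notation): for every x, the stochastic
% gradient g(x, \xi) of the strongly convex objective f satisfies
\mathbb{E}_{\xi}\!\left[ \| g(x,\xi) - \nabla f(x) \|^{2} \right]
    \le \rho^{2}\, \| \nabla f(x) \|^{2} + \sigma^{2},
% where \rho^{2}\|\nabla f(x)\|^{2} is the multiplicative part that grows with
% the full gradient and \sigma^{2} is the constant additive part.
% Setting \rho = 0 recovers the classical bounded-variance assumption.
```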
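The following minimal Python sketch is illustrative only (it is not the paper's algorithms or constants): a stochastic Nesterov-style momentum iteration on a strongly convex quadratic, with gradient noise generated to satisfy a growth-condition bound, followed by a simple tail-averaging step over the last half of the iterates. The objective, step size, momentum coefficient, and noise levels rho and sigma are all assumptions chosen for the toy example.

```python
# Illustrative sketch: stochastic Nesterov-style acceleration on a strongly
# convex quadratic, with noise obeying a growth condition, plus tail averaging.
import numpy as np

rng = np.random.default_rng(0)

# Strongly convex quadratic f(x) = 0.5 * x^T A x, minimized at x* = 0.
d = 20
eigs = np.linspace(1.0, 100.0, d)          # mu = 1, L = 100
A = np.diag(eigs)

def full_grad(x):
    return A @ x

def stochastic_grad(x, rho=0.5, sigma=0.1):
    """Noisy gradient whose variance satisfies a growth-condition bound:
    E||g - grad f||^2 = rho^2 * ||grad f||^2 + sigma^2 (illustrative model)."""
    g = full_grad(x)
    noise = rng.standard_normal(d)
    noise *= np.sqrt(rho**2 * np.linalg.norm(g)**2 + sigma**2) / np.sqrt(d)
    return g + noise

mu, L = eigs.min(), eigs.max()
eta = 1.0 / L                                                   # step size
beta = (np.sqrt(L) - np.sqrt(mu)) / (np.sqrt(L) + np.sqrt(mu))  # momentum

T = 2000
x = rng.standard_normal(d)
y = x.copy()
iterates = []
for t in range(T):
    x_next = y - eta * stochastic_grad(y)   # gradient step at extrapolated point
    y = x_next + beta * (x_next - x)        # Nesterov extrapolation
    x = x_next
    iterates.append(x.copy())

# Tail averaging: average only the last half of the iterates to damp the
# noise-induced neighborhood around the optimum.
tail_avg = np.mean(iterates[T // 2:], axis=0)

print("last iterate ||x - x*||:", np.linalg.norm(iterates[-1]))
print("tail average ||x - x*||:", np.linalg.norm(tail_avg))
```

In this toy setting the tail average typically lands closer to the optimum than the last iterate, which loosely mirrors the role tail averaging plays for DAM+ and iDAM+ in the abstract's description.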
Source journal
Mathematics of Operations Research (Management Science / Applied Mathematics)
CiteScore: 3.40
Self-citation rate: 5.90%
Articles per year: 178
Review time: 15.0 months
Journal description: Mathematics of Operations Research is an international journal of the Institute for Operations Research and the Management Sciences (INFORMS). The journal invites articles concerned with the mathematical and computational foundations in the areas of continuous, discrete, and stochastic optimization; mathematical programming; dynamic programming; stochastic processes; stochastic models; simulation methodology; control and adaptation; networks; game theory; and decision theory. Also sought are contributions to learning theory and machine learning that have special relevance to decision making, operations research, and management science. The emphasis is on originality, quality, and importance; correctness alone is not sufficient. Significant developments in operations research and management science not having substantial mathematical interest should be directed to other journals such as Management Science or Operations Research.
Latest articles in this journal
Dual Solutions in Convex Stochastic Optimization
Exit Game with Private Information
A Retrospective Approximation Approach for Smooth Stochastic Optimization
The Minimax Property in Infinite Two-Person Win-Lose Games
Envy-Free Division of Multilayered Cakes