A Retrospective Approximation Approach for Smooth Stochastic Optimization

IF 1.4 | CAS Tier 3 (Mathematics) | JCR Q2 (Mathematics, Applied) | Mathematics of Operations Research | Pub Date: 2024-09-09 | DOI: 10.1287/moor.2022.0136
David Newton, Raghu Bollapragada, Raghu Pasupathy, Nung Kwan Yip
{"title":"平滑随机优化的回溯逼近法","authors":"David Newton, Raghu Bollapragada, Raghu Pasupathy, Nung Kwan Yip","doi":"10.1287/moor.2022.0136","DOIUrl":null,"url":null,"abstract":"Stochastic Gradient (SG) is the de facto iterative technique to solve stochastic optimization (SO) problems with a smooth (nonconvex) objective f and a stochastic first-order oracle. SG’s attractiveness is due in part to its simplicity of executing a single step along the negative subsampled gradient direction to update the incumbent iterate. In this paper, we question SG’s choice of executing a single step as opposed to multiple steps between subsample updates. Our investigation leads naturally to generalizing SG into Retrospective Approximation (RA), where, during each iteration, a “deterministic solver” executes possibly multiple steps on a subsampled deterministic problem and stops when further solving is deemed unnecessary from the standpoint of statistical efficiency. RA thus formalizes what is appealing for implementation—during each iteration, “plug in” a solver—for example, L-BFGS line search or Newton-CG—as is, and solve only to the extent necessary. We develop a complete theory using relative error of the observed gradients as the principal object, demonstrating that almost sure and L<jats:sub>1</jats:sub> consistency of RA are preserved under especially weak conditions when sample sizes are increased at appropriate rates. We also characterize the iteration and oracle complexity (for linear and sublinear solvers) of RA and identify a practical termination criterion leading to optimal complexity rates. To subsume nonconvex f, we present a certain “random central limit theorem” that incorporates the effect of curvature across all first-order critical points, demonstrating that the asymptotic behavior is described by a certain mixture of normals. The message from our numerical experiments is that the ability of RA to incorporate existing second-order deterministic solvers in a strategic manner might be important from the standpoint of dispensing with hyper-parameter tuning.Funding: R. Pasupathy received financial support from the Office of Naval Research [Grants N000141712295 and 13000991]. R. Bollapragada received financial support from the Lawrence Livermore National Laboratory and the National Science Foundation [Grant NSF DMS 2324643].","PeriodicalId":49852,"journal":{"name":"Mathematics of Operations Research","volume":"182 1","pages":""},"PeriodicalIF":1.4000,"publicationDate":"2024-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A Retrospective Approximation Approach for Smooth Stochastic Optimization\",\"authors\":\"David Newton, Raghu Bollapragada, Raghu Pasupathy, Nung Kwan Yip\",\"doi\":\"10.1287/moor.2022.0136\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Stochastic Gradient (SG) is the de facto iterative technique to solve stochastic optimization (SO) problems with a smooth (nonconvex) objective f and a stochastic first-order oracle. SG’s attractiveness is due in part to its simplicity of executing a single step along the negative subsampled gradient direction to update the incumbent iterate. In this paper, we question SG’s choice of executing a single step as opposed to multiple steps between subsample updates. 
Our investigation leads naturally to generalizing SG into Retrospective Approximation (RA), where, during each iteration, a “deterministic solver” executes possibly multiple steps on a subsampled deterministic problem and stops when further solving is deemed unnecessary from the standpoint of statistical efficiency. RA thus formalizes what is appealing for implementation—during each iteration, “plug in” a solver—for example, L-BFGS line search or Newton-CG—as is, and solve only to the extent necessary. We develop a complete theory using relative error of the observed gradients as the principal object, demonstrating that almost sure and L<jats:sub>1</jats:sub> consistency of RA are preserved under especially weak conditions when sample sizes are increased at appropriate rates. We also characterize the iteration and oracle complexity (for linear and sublinear solvers) of RA and identify a practical termination criterion leading to optimal complexity rates. To subsume nonconvex f, we present a certain “random central limit theorem” that incorporates the effect of curvature across all first-order critical points, demonstrating that the asymptotic behavior is described by a certain mixture of normals. The message from our numerical experiments is that the ability of RA to incorporate existing second-order deterministic solvers in a strategic manner might be important from the standpoint of dispensing with hyper-parameter tuning.Funding: R. Pasupathy received financial support from the Office of Naval Research [Grants N000141712295 and 13000991]. R. Bollapragada received financial support from the Lawrence Livermore National Laboratory and the National Science Foundation [Grant NSF DMS 2324643].\",\"PeriodicalId\":49852,\"journal\":{\"name\":\"Mathematics of Operations Research\",\"volume\":\"182 1\",\"pages\":\"\"},\"PeriodicalIF\":1.4000,\"publicationDate\":\"2024-09-09\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Mathematics of Operations Research\",\"FirstCategoryId\":\"100\",\"ListUrlMain\":\"https://doi.org/10.1287/moor.2022.0136\",\"RegionNum\":3,\"RegionCategory\":\"数学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"MATHEMATICS, APPLIED\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Mathematics of Operations Research","FirstCategoryId":"100","ListUrlMain":"https://doi.org/10.1287/moor.2022.0136","RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"MATHEMATICS, APPLIED","Score":null,"Total":0}
Citations: 0

Abstract

Stochastic Gradient (SG) is the de facto iterative technique to solve stochastic optimization (SO) problems with a smooth (nonconvex) objective f and a stochastic first-order oracle. SG’s attractiveness is due in part to its simplicity of executing a single step along the negative subsampled gradient direction to update the incumbent iterate. In this paper, we question SG’s choice of executing a single step as opposed to multiple steps between subsample updates. Our investigation leads naturally to generalizing SG into Retrospective Approximation (RA), where, during each iteration, a “deterministic solver” executes possibly multiple steps on a subsampled deterministic problem and stops when further solving is deemed unnecessary from the standpoint of statistical efficiency. RA thus formalizes what is appealing for implementation—during each iteration, “plug in” a solver—for example, L-BFGS line search or Newton-CG—as is, and solve only to the extent necessary. We develop a complete theory using relative error of the observed gradients as the principal object, demonstrating that almost sure and L1 consistency of RA are preserved under especially weak conditions when sample sizes are increased at appropriate rates. We also characterize the iteration and oracle complexity (for linear and sublinear solvers) of RA and identify a practical termination criterion leading to optimal complexity rates. To subsume nonconvex f, we present a certain “random central limit theorem” that incorporates the effect of curvature across all first-order critical points, demonstrating that the asymptotic behavior is described by a certain mixture of normals. The message from our numerical experiments is that the ability of RA to incorporate existing second-order deterministic solvers in a strategic manner might be important from the standpoint of dispensing with hyper-parameter tuning.

Funding: R. Pasupathy received financial support from the Office of Naval Research [Grants N000141712295 and 13000991]. R. Bollapragada received financial support from the Lawrence Livermore National Laboratory and the National Science Foundation [Grant NSF DMS 2324643].
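To make the outer/inner structure of RA concrete, below is a minimal, self-contained Python sketch of the loop the abstract describes: at each outer iteration a fixed subsample defines a deterministic sample-average problem, an off-the-shelf deterministic solver (L-BFGS via SciPy here) takes as many steps as it needs on that problem, the inner stopping tolerance is tied to the sampling error, and the sample size is then increased. The quadratic test objective, the geometric sample-size schedule, and the 1/sqrt(n) inner tolerance are illustrative assumptions for this sketch, not the paper's exact prescriptions.

# A minimal sketch of the RA outer loop, assuming NumPy and SciPy are available.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Toy stochastic objective: f(x) = E[0.5 * ||x - xi||^2] with xi ~ N(mu, I),
# whose minimizer is mu. It stands in for the smooth objective f in the paper.
mu = np.array([1.0, -2.0])

def sample_batch(n):
    """Draw n independent realizations of the random parameter xi."""
    return mu + rng.standard_normal((n, mu.size))

def subsampled_f_and_grad(x, batch):
    """Sample-average (deterministic) objective value and gradient for one fixed batch."""
    diffs = x - batch
    f = 0.5 * np.mean(np.sum(diffs ** 2, axis=1))
    g = np.mean(diffs, axis=0)
    return f, g

def retrospective_approximation(x0, outer_iters=8, n0=16, growth=2.0):
    """RA loop: fix a subsample, let a deterministic solver (L-BFGS here) take
    possibly many steps on the resulting sample-average problem, stop the inner
    solve at a tolerance comparable to the sampling error, then grow the sample."""
    x, n = np.asarray(x0, dtype=float), n0
    for k in range(outer_iters):
        batch = sample_batch(n)
        gtol = 1.0 / np.sqrt(n)  # no point solving far below the statistical noise level
        res = minimize(lambda z: subsampled_f_and_grad(z, batch), x,
                       jac=True, method="L-BFGS-B",
                       options={"gtol": gtol, "maxiter": 100})
        x = res.x
        print(f"outer iteration {k}: sample size {n}, iterate {x}")
        n = int(np.ceil(growth * n))  # geometric sample-size growth (an illustrative choice)
    return x

if __name__ == "__main__":
    x_final = retrospective_approximation(np.zeros(2))  # should approach mu = [1, -2]

In this sketch the iterates approach the minimizer of the expected objective while each subsampled problem is never solved much more accurately than its own statistical error warrants, which is the efficiency trade-off the paper's consistency and complexity results are built around.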
Source journal: Mathematics of Operations Research (Management Science / Applied Mathematics)
CiteScore: 3.40
Self-citation rate: 5.90%
Articles published: 178
Average review time: 15.0 months
Journal description: Mathematics of Operations Research is an international journal of the Institute for Operations Research and the Management Sciences (INFORMS). The journal invites articles concerned with the mathematical and computational foundations in the areas of continuous, discrete, and stochastic optimization; mathematical programming; dynamic programming; stochastic processes; stochastic models; simulation methodology; control and adaptation; networks; game theory; and decision theory. Also sought are contributions to learning theory and machine learning that have special relevance to decision making, operations research, and management science. The emphasis is on originality, quality, and importance; correctness alone is not sufficient. Significant developments in operations research and management science not having substantial mathematical interest should be directed to other journals such as Management Science or Operations Research.