The continuous stochastic gradient method: part I–convergence theory

Impact Factor: 1.6 · CAS Region 2 (Mathematics) · JCR Q2 (Mathematics, Applied) · Computational Optimization and Applications · Pub date: 2023-11-23 · DOI: 10.1007/s10589-023-00542-8
Max Grieshammer, Lukas Pflug, Michael Stingl, Andrian Uihlein
{"title":"连续随机梯度法:第一部分收敛理论","authors":"Max Grieshammer, Lukas Pflug, Michael Stingl, Andrian Uihlein","doi":"10.1007/s10589-023-00542-8","DOIUrl":null,"url":null,"abstract":"<p>In this contribution, we present a full overview of the <i>continuous stochastic gradient</i> (CSG) method, including convergence results, step size rules and algorithmic insights. We consider optimization problems in which the objective function requires some form of integration, e.g., expected values. Since approximating the integration by a fixed quadrature rule can introduce artificial local solutions into the problem while simultaneously raising the computational effort, stochastic optimization schemes have become increasingly popular in such contexts. However, known stochastic gradient type methods are typically limited to expected risk functions and inherently require many iterations. The latter is particularly problematic, if the evaluation of the cost function involves solving multiple state equations, given, e.g., in form of partial differential equations. To overcome these drawbacks, a recent article introduced the CSG method, which reuses old gradient sample information via the calculation of design dependent integration weights to obtain a better approximation to the full gradient. While in the original CSG paper convergence of a subsequence was established for a diminishing step size, here, we provide a complete convergence analysis of CSG for constant step sizes and an Armijo-type line search. Moreover, new methods to obtain the integration weights are presented, extending the application range of CSG to problems involving higher dimensional integrals and distributed data.</p>","PeriodicalId":55227,"journal":{"name":"Computational Optimization and Applications","volume":"62 2","pages":""},"PeriodicalIF":1.6000,"publicationDate":"2023-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"The continuous stochastic gradient method: part I–convergence theory\",\"authors\":\"Max Grieshammer, Lukas Pflug, Michael Stingl, Andrian Uihlein\",\"doi\":\"10.1007/s10589-023-00542-8\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>In this contribution, we present a full overview of the <i>continuous stochastic gradient</i> (CSG) method, including convergence results, step size rules and algorithmic insights. We consider optimization problems in which the objective function requires some form of integration, e.g., expected values. Since approximating the integration by a fixed quadrature rule can introduce artificial local solutions into the problem while simultaneously raising the computational effort, stochastic optimization schemes have become increasingly popular in such contexts. However, known stochastic gradient type methods are typically limited to expected risk functions and inherently require many iterations. The latter is particularly problematic, if the evaluation of the cost function involves solving multiple state equations, given, e.g., in form of partial differential equations. To overcome these drawbacks, a recent article introduced the CSG method, which reuses old gradient sample information via the calculation of design dependent integration weights to obtain a better approximation to the full gradient. 
While in the original CSG paper convergence of a subsequence was established for a diminishing step size, here, we provide a complete convergence analysis of CSG for constant step sizes and an Armijo-type line search. Moreover, new methods to obtain the integration weights are presented, extending the application range of CSG to problems involving higher dimensional integrals and distributed data.</p>\",\"PeriodicalId\":55227,\"journal\":{\"name\":\"Computational Optimization and Applications\",\"volume\":\"62 2\",\"pages\":\"\"},\"PeriodicalIF\":1.6000,\"publicationDate\":\"2023-11-23\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Computational Optimization and Applications\",\"FirstCategoryId\":\"100\",\"ListUrlMain\":\"https://doi.org/10.1007/s10589-023-00542-8\",\"RegionNum\":2,\"RegionCategory\":\"数学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"MATHEMATICS, APPLIED\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computational Optimization and Applications","FirstCategoryId":"100","ListUrlMain":"https://doi.org/10.1007/s10589-023-00542-8","RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"MATHEMATICS, APPLIED","Score":null,"Total":0}
Citations: 1

Abstract

In this contribution, we present a full overview of the continuous stochastic gradient (CSG) method, including convergence results, step size rules and algorithmic insights. We consider optimization problems in which the objective function requires some form of integration, e.g., expected values. Since approximating the integration by a fixed quadrature rule can introduce artificial local solutions into the problem while simultaneously raising the computational effort, stochastic optimization schemes have become increasingly popular in such contexts. However, known stochastic gradient type methods are typically limited to expected risk functions and inherently require many iterations. The latter is particularly problematic if the evaluation of the cost function involves solving multiple state equations, given, e.g., in the form of partial differential equations. To overcome these drawbacks, a recent article introduced the CSG method, which reuses old gradient sample information via the calculation of design-dependent integration weights to obtain a better approximation of the full gradient. While the original CSG paper established convergence of a subsequence for a diminishing step size, here we provide a complete convergence analysis of CSG for constant step sizes and an Armijo-type line search. Moreover, new methods to obtain the integration weights are presented, extending the application range of CSG to problems involving higher dimensional integrals and distributed data.
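The mechanism the abstract describes in words (reusing all past gradient samples through design-dependent integration weights) can be illustrated on a toy problem. The following Python sketch is not from the paper: it assumes a quadratic integrand j(u, x) = 0.5·||u - x||^2 with X uniform on [0, 1]^d, and it approximates the integration weights with a simple nearest-neighbor Monte Carlo rule in a combined design-sample metric, a simplified stand-in for the exact and empirical weight computations the authors present. All names (grad_sample, n_probes, the metric itself) are illustrative assumptions.

```python
import numpy as np

# Toy CSG sketch (assumptions, not the paper's implementation):
# minimize J(u) = E_X[ j(u, X) ] with j(u, x) = 0.5 * ||u - x||^2 and
# X ~ Uniform([0, 1]^d), so the exact minimizer is u* = E[X] = (0.5, ..., 0.5).

rng = np.random.default_rng(0)
d = 2               # design/sample dimension (toy choice)
tau = 0.5           # constant step size; the regime the paper's analysis covers
n_iters = 200
n_probes = 256      # Monte Carlo probes used to estimate the integration weights

def grad_sample(u, x):
    """Gradient of j(u, x) = 0.5 * ||u - x||^2 with respect to the design u."""
    return u - x

us, xs, gs = [], [], []     # histories of designs, samples, gradient samples
u = np.zeros(d)             # initial design

for n in range(n_iters):
    x = rng.uniform(size=d)                      # fresh sample x_n ~ mu
    us.append(u.copy()); xs.append(x); gs.append(grad_sample(u, x))
    U, X, G = np.array(us), np.array(xs), np.array(gs)

    # Empirical integration weights: draw probes z ~ mu and assign each probe
    # to the past pair (u_k, x_k) closest to (u_n, z) in the combined metric
    # ||u_n - u_k|| + ||z - x_k||.  alpha_k is the fraction of probes won by
    # index k, so samples taken at designs far from u_n receive little weight.
    Z = rng.uniform(size=(n_probes, d))
    du = np.linalg.norm(u - U, axis=1)                               # (n+1,)
    dist = du[None, :] + np.linalg.norm(Z[:, None, :] - X[None, :, :], axis=2)
    alpha = np.bincount(dist.argmin(axis=1), minlength=len(us)) / n_probes

    # CSG search direction: weighted sum of ALL past gradient samples,
    # reusing old information instead of relying on the newest sample alone.
    u = u - tau * (alpha @ G)

print("final design:", u)   # approaches u* = (0.5, 0.5)
```

Because the weighted sum of past samples tends toward the full gradient as sample information accumulates, CSG can run with a constant step size, in contrast to plain stochastic gradient descent, which needs diminishing steps to suppress sampling noise.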



Source journal: Computational Optimization and Applications
CiteScore: 3.70
Self-citation rate: 9.10%
Articles per year: 91
Review time: 10 months
Journal description: Computational Optimization and Applications is a peer-reviewed journal committed to timely publication of research and tutorial papers on the analysis and development of computational algorithms and modeling technology for optimization. Algorithms either for general classes of optimization problems or for more specific applied problems are of interest. Stochastic algorithms as well as deterministic algorithms will be considered. Papers that provide both theoretical analysis and carefully designed computational experiments are particularly welcome. Topics of interest include, but are not limited to, the following: Large Scale Optimization, Unconstrained Optimization, Linear Programming, Quadratic Programming, Complementarity Problems and Variational Inequalities, Constrained Optimization, Nondifferentiable Optimization, Integer Programming, Combinatorial Optimization, Stochastic Optimization, Multiobjective Optimization, Network Optimization, Complexity Theory, Approximations and Error Analysis, Parametric Programming and Sensitivity Analysis, Parallel Computing, Distributed Computing, and Vector Processing, Software, Benchmarks, Numerical Experimentation and Comparisons, Modelling Languages and Systems for Optimization, Automatic Differentiation, Applications in Engineering, Finance, Optimal Control, Optimal Design, Operations Research, Transportation, Economics, Communications, Manufacturing, and Management Science.