
Latest Publications in Computational Optimization and Applications

Enhancements of discretization approaches for non-convex mixed-integer quadratically constrained quadratic programming: Part I
IF 2.2 · Zone 2 (Mathematics) · Q2 (Mathematics, Applied) · Pub Date: 2024-01-30 · DOI: 10.1007/s10589-023-00543-7
Benjamin Beach, Robert Burlacu, Andreas Bärmann, Lukas Hager, Robert Hildebrand

We study mixed-integer programming (MIP) relaxation techniques for the solution of non-convex mixed-integer quadratically constrained quadratic programs (MIQCQPs). We present MIP relaxation methods for non-convex continuous variable products. In this paper, we consider MIP relaxations based on separable reformulation. The main focus is the introduction of the enhanced separable MIP relaxation for non-convex quadratic products of the form z = xy, called hybrid separable (HybS). Additionally, we introduce a logarithmic MIP relaxation for univariate quadratic terms, called sawtooth relaxation, based on Beach (J Glob Optim 84:869–912, 2022). We combine the latter with HybS and existing separable reformulations to derive MIP relaxations of MIQCQPs. We provide a comprehensive theoretical analysis of these techniques, underlining the theoretical advantages of HybS compared to its predecessors. We perform a broad computational study to demonstrate the effectiveness of the enhanced MIP relaxations in terms of producing tight dual bounds for MIQCQPs. In Part II, we study MIP relaxations that extend the normalized multiparametric disaggregation technique (NMDT) (Castro in J Glob Optim 64:765–784, 2015) and present a computational study that also includes the MIP relaxations from this work and compares them with state-of-the-art MIQCQP solvers.
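To make the separable idea concrete: the bilinear term z = xy can be rewritten as a difference of two univariate squares, z = ((x+y)^2 - (x-y)^2)/4, after which only terms of the form u^2 need to be relaxed, e.g., by piecewise-linear MIP constructions such as the sawtooth relaxation. The sketch below, assuming only NumPy, checks this identity and builds a generic tangent-based piecewise-linear lower bound on u^2; it illustrates the principle only and is not the authors' HybS or logarithmic sawtooth construction.

```python
import numpy as np

def separable_split(x, y):
    """Rewrite the bilinear term z = x*y as a difference of squares:
    z = ((x + y)**2 - (x - y)**2) / 4, leaving only univariate quadratics."""
    s, d = x + y, x - y
    return (s**2 - d**2) / 4.0

def pwl_square_lower(u, lo, hi, n=4):
    """Generic piecewise-linear underestimator of u**2 on [lo, hi] built from
    tangents at n+1 grid points (tangents underestimate a convex function).
    This is a plain PWL bound, not the paper's logarithmic sawtooth relaxation."""
    pts = np.linspace(lo, hi, n + 1)
    # Tangent at p: u**2 >= p**2 + 2*p*(u - p) = 2*p*u - p**2.
    return max(2.0 * p * u - p * p for p in pts)

if __name__ == "__main__":
    x, y = 1.3, -0.7
    assert abs(separable_split(x, y) - x * y) < 1e-12
    u = 0.9
    assert pwl_square_lower(u, -2.0, 2.0) <= u**2  # valid lower bound
    print(separable_split(x, y), pwl_square_lower(u, -2.0, 2.0), u**2)
```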

Citations: 0
Alternative extension of the Hager–Zhang conjugate gradient method for vector optimization
IF 2.2 · Zone 2 (Mathematics) · Q2 (Mathematics, Applied) · Pub Date: 2024-01-24 · DOI: 10.1007/s10589-023-00548-2
Qingjie Hu, Liping Zhu, Yu Chen

Recently, Gonçalves and Prudente proposed an extension of the Hager–Zhang nonlinear conjugate gradient method for vector optimization (Comput Optim Appl 76:889–916, 2020). They initially demonstrated that directly extending the Hager–Zhang method to vector optimization may not result in descent in the vector sense, even when employing an exact line search. By utilizing a sufficiently accurate line search, they subsequently introduced a self-adjusting Hager–Zhang conjugate gradient method in the vector sense. The global convergence of this new scheme was proven without requiring regular restarts or any convexity assumptions. In this paper, we propose an alternative extension of the Hager–Zhang nonlinear conjugate gradient method for vector optimization that preserves its desirable scalar property, i.e., ensuring sufficient descent without relying on any line search or convexity assumption. Furthermore, we investigate its global convergence with the Wolfe line search under mild assumptions. Finally, numerical experiments are presented to illustrate the practical behavior of our proposed method.
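For orientation, here is a minimal single-objective sketch of the Hager–Zhang direction update that these vector extensions build on. The backtracking Armijo search and the steepest-descent safeguard are illustrative assumptions for the demo, not the line-search conditions analyzed in the paper.

```python
import numpy as np

def hager_zhang_cg(f, grad, x0, iters=200, tol=1e-8):
    """Nonlinear CG with the scalar Hager-Zhang beta; illustrative only."""
    x = x0.astype(float).copy()
    g = grad(x)
    d = -g
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        # Backtracking Armijo line search (an assumption for this demo).
        t, slope = 1.0, g @ d
        while f(x + t * d) > f(x) + 1e-4 * t * slope:
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        y = g_new - g
        dy = d @ y
        # Hager-Zhang parameter: beta = (y - 2d*||y||^2/(d'y))' g_new / (d'y).
        beta = 0.0 if abs(dy) < 1e-16 else ((y - 2.0 * d * (y @ y) / dy) @ g_new) / dy
        d = -g_new + beta * d
        if g_new @ d >= 0:  # safeguard: fall back to steepest descent
            d = -g_new
        x, g = x_new, g_new
    return x

if __name__ == "__main__":
    f = lambda x: (x[0] - 1.0) ** 2 + 10.0 * (x[1] + 2.0) ** 2
    grad = lambda x: np.array([2.0 * (x[0] - 1.0), 20.0 * (x[1] + 2.0)])
    print(hager_zhang_cg(f, grad, np.zeros(2)))  # approx [1, -2]
```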

Citations: 0
A family of Barzilai-Borwein steplengths from the viewpoint of scaled total least squares
IF 2.2 · Zone 2 (Mathematics) · Q2 (Mathematics, Applied) · Pub Date: 2024-01-18 · DOI: 10.1007/s10589-023-00546-4
Shiru Li, Tao Zhang, Yong Xia

The Barzilai-Borwein (BB) steplengths play an important role in practical gradient methods for solving unconstrained optimization problems. Motivated by the observation that the two well-known BB steplengths correspond to the ordinary and the data least-squares problems, respectively, we introduce a novel family of BB steplengths from the viewpoint of scaled total least squares. Numerical experiments demonstrate that high performance can be achieved with a carefully selected BB steplength from the new family.
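The two classical BB steplengths referenced above are alpha_BB1 = s^T s / s^T y and alpha_BB2 = s^T y / y^T y, with s = x_k - x_{k-1} and y = g_k - g_{k-1}. A minimal gradient-method sketch using them, assuming only NumPy; the paper's scaled-total-least-squares family is its contribution and is not reproduced here.

```python
import numpy as np

def bb_gradient_descent(grad, x0, iters=100, variant=1):
    """Gradient method with Barzilai-Borwein steps.
    BB1: alpha = (s@s)/(s@y)  (ordinary least squares viewpoint)
    BB2: alpha = (s@y)/(y@y)  (data least squares viewpoint)"""
    x_prev = x0.astype(float).copy()
    g_prev = grad(x_prev)
    x = x_prev - 1e-3 * g_prev          # small bootstrap first step
    for _ in range(iters):
        g = grad(x)
        s, y = x - x_prev, g - g_prev
        alpha = (s @ s) / (s @ y) if variant == 1 else (s @ y) / (y @ y)
        x_prev, g_prev = x, g
        x = x - alpha * g
    return x

if __name__ == "__main__":
    A = np.diag([1.0, 10.0, 100.0])
    b = np.array([1.0, 2.0, 3.0])
    grad = lambda x: A @ x - b          # minimize 0.5 x^T A x - b^T x
    print(bb_gradient_descent(grad, np.zeros(3)))  # approx A^{-1} b
```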

Citations: 0
Internet traffic tensor completion with tensor nuclear norm
IF 2.2 · Zone 2 (Mathematics) · Q2 (Mathematics, Applied) · Pub Date: 2023-12-21 · DOI: 10.1007/s10589-023-00545-5
Can Li, Yannan Chen, Dong-hui Li
{"title":"Internet traffic tensor completion with tensor nuclear norm","authors":"Can Li, Yannan Chen, Dong-hui Li","doi":"10.1007/s10589-023-00545-5","DOIUrl":"https://doi.org/10.1007/s10589-023-00545-5","url":null,"abstract":"","PeriodicalId":55227,"journal":{"name":"Computational Optimization and Applications","volume":"20 7","pages":""},"PeriodicalIF":2.2,"publicationDate":"2023-12-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138952785","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Correction to: The continuous stochastic gradient method: part II–application and numerics
IF 2.2 · Zone 2 (Mathematics) · Q2 (Mathematics, Applied) · Pub Date: 2023-12-13 · DOI: 10.1007/s10589-023-00544-6
Max Grieshammer, Lukas Pflug, Michael Stingl, Andrian Uihlein
{"title":"Correction to: The continuous stochastic gradient method: part II–application and numerics","authors":"Max Grieshammer, Lukas Pflug, Michael Stingl, Andrian Uihlein","doi":"10.1007/s10589-023-00544-6","DOIUrl":"https://doi.org/10.1007/s10589-023-00544-6","url":null,"abstract":"","PeriodicalId":55227,"journal":{"name":"Computational Optimization and Applications","volume":"26 6","pages":""},"PeriodicalIF":2.2,"publicationDate":"2023-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139005130","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A Bregman–Kaczmarz method for nonlinear systems of equations
IF 2.2 · Zone 2 (Mathematics) · Q2 (Mathematics, Applied) · Pub Date: 2023-12-07 · DOI: 10.1007/s10589-023-00541-9
Robert Gower, Dirk A. Lorenz, Maximilian Winkler

We propose a new randomized method for solving systems of nonlinear equations, which can find sparse solutions or solutions under certain simple constraints. The scheme only takes gradients of component functions and uses Bregman projections onto the solution space of a Newton equation. In the special case of Euclidean projections, the method is known as the nonlinear Kaczmarz method. Furthermore, if the component functions are nonnegative, we are in the setting of optimization under the interpolation assumption, and the method reduces to SGD with the recently proposed stochastic Polyak step size. For general Bregman projections, our method is a stochastic mirror descent with a novel adaptive step size. We prove that in the convex setting each iteration of our method results in a smaller Bregman distance to exact solutions than the standard Polyak step. Our generalization to Bregman projections comes at the price that a convex one-dimensional optimization problem needs to be solved in each iteration. This can typically be done with globalized Newton iterations. Convergence is proved in two classical settings of nonlinearity: for convex nonnegative functions and, locally, for functions that fulfill the tangential cone condition. Finally, we show examples in which the proposed method outperforms similar methods with the same memory requirements.
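A minimal sketch of the Euclidean special case named above: the nonlinear Kaczmarz iteration, which projects onto the linearization of one randomly chosen component equation with a Polyak-type step. The Bregman-projection variant with sparsity is the paper's contribution and is not reproduced here; the small demo system is an assumption for illustration.

```python
import numpy as np

def nonlinear_kaczmarz(funcs, grads, x0, iters=2000, seed=0):
    """Randomized nonlinear Kaczmarz: pick a component equation f_i(x) = 0
    and project (Euclidean sense) onto its local linearization,
    x <- x - f_i(x) / ||grad f_i(x)||^2 * grad f_i(x)."""
    rng = np.random.default_rng(seed)
    x = x0.astype(float).copy()
    m = len(funcs)
    for _ in range(iters):
        i = rng.integers(m)
        fi, gi = funcs[i](x), grads[i](x)
        norm2 = gi @ gi
        if norm2 > 0:
            x = x - (fi / norm2) * gi
    return x

if __name__ == "__main__":
    # Demo system: x0^2 + x1^2 = 1 and x0 - x1 = 0 -> x = (1,1)/sqrt(2).
    funcs = [lambda x: x[0] ** 2 + x[1] ** 2 - 1.0, lambda x: x[0] - x[1]]
    grads = [lambda x: np.array([2 * x[0], 2 * x[1]]),
             lambda x: np.array([1.0, -1.0])]
    print(nonlinear_kaczmarz(funcs, grads, np.array([0.8, 0.3])))
```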

Citations: 0
The continuous stochastic gradient method: part II–application and numerics
IF 2.2 · Zone 2 (Mathematics) · Q2 (Mathematics, Applied) · Pub Date: 2023-11-24 · DOI: 10.1007/s10589-023-00540-w
Max Grieshammer, Lukas Pflug, Michael Stingl, Andrian Uihlein

In this contribution, we present a numerical analysis of the continuous stochastic gradient (CSG) method, including applications from topology optimization and convergence rates. In contrast to standard stochastic gradient optimization schemes, CSG does not discard old gradient samples from previous iterations. Instead, design-dependent integration weights are calculated to form a convex combination that approximates the true gradient at the current design. As the approximation error vanishes in the course of the iterations, CSG represents a hybrid approach, starting off like a purely stochastic method and behaving like a full gradient scheme in the limit. In this work, the efficiency of CSG is demonstrated for practically relevant applications from topology optimization. These settings are characterized both by a large number of optimization variables and by an objective function whose evaluation requires the numerical computation of multiple integrals concatenated in a nonlinear fashion. Such problems could not previously be solved by any existing optimization method. Lastly, with regard to convergence rates, first estimates are provided and confirmed with the help of numerical experiments.
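A heavily simplified sketch of the CSG weighting principle: past sample gradients are recombined with convex, design-dependent weights that approximate integration over the random parameter. Here the weights come from a Monte Carlo nearest-neighbor estimate in the parameter space only; this is an illustration under stated assumptions, not the authors' weight calculation (which also accounts for design proximity).

```python
import numpy as np

def csg_step(history, x, sample_grad, rng, lr=0.1, n_mc=500):
    """One CSG-style step for E_w[f(x, w)], w ~ U[0, 1].
    history stores (w_i, x_i, grad f(x_i, w_i)); each weight approximates
    the measure of the region of w-space where sample i is closest (1-NN)."""
    w = rng.random()
    history.append((w, x.copy(), sample_grad(x, w)))
    ws = np.array([h[0] for h in history])
    # Monte Carlo estimate of nearest-neighbor cell measures in w-space.
    probes = rng.random(n_mc)
    owners = np.abs(probes[:, None] - ws[None, :]).argmin(axis=1)
    weights = np.bincount(owners, minlength=len(ws)) / n_mc
    g_hat = sum(wt * h[2] for wt, h in zip(weights, history))  # convex combo
    return x - lr * g_hat

if __name__ == "__main__":
    # E_w[(x - w)^2] / 2 with w ~ U[0, 1] is minimized at x = 1/2.
    sample_grad = lambda x, w: x - w
    rng = np.random.default_rng(1)
    x, history = np.array(2.0), []
    for _ in range(200):
        x = csg_step(history, x, sample_grad, rng)
    print(x)  # approx 0.5
```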

Citations: 2
The continuous stochastic gradient method: part I–convergence theory
IF 2.2 · Zone 2 (Mathematics) · Q2 (Mathematics, Applied) · Pub Date: 2023-11-23 · DOI: 10.1007/s10589-023-00542-8
Max Grieshammer, Lukas Pflug, Michael Stingl, Andrian Uihlein

In this contribution, we present a full overview of the continuous stochastic gradient (CSG) method, including convergence results, step size rules and algorithmic insights. We consider optimization problems in which the objective function requires some form of integration, e.g., expected values. Since approximating the integration by a fixed quadrature rule can introduce artificial local solutions into the problem while simultaneously raising the computational effort, stochastic optimization schemes have become increasingly popular in such contexts. However, known stochastic gradient type methods are typically limited to expected risk functions and inherently require many iterations. The latter is particularly problematic if the evaluation of the cost function involves solving multiple state equations, given, e.g., in the form of partial differential equations. To overcome these drawbacks, a recent article introduced the CSG method, which reuses old gradient sample information via the calculation of design-dependent integration weights to obtain a better approximation to the full gradient. While the original CSG paper established convergence of a subsequence for a diminishing step size, here we provide a complete convergence analysis of CSG for constant step sizes and an Armijo-type line search. Moreover, new methods to obtain the integration weights are presented, extending the application range of CSG to problems involving higher-dimensional integrals and distributed data.
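For reference, the Armijo-type line search named above, in its textbook backtracking form; the paper's variant operates on the CSG approximate gradient, so this generic version is only a baseline sketch.

```python
import numpy as np

def armijo_step(f, x, g, d, t0=1.0, c=1e-4, shrink=0.5, max_iter=50):
    """Backtracking Armijo rule: shrink t until
    f(x + t*d) <= f(x) + c * t * <g, d>, with d a descent direction."""
    t, fx, slope = t0, f(x), g @ d
    for _ in range(max_iter):
        if f(x + t * d) <= fx + c * t * slope:
            return t
        t *= shrink
    return t

if __name__ == "__main__":
    f = lambda x: 0.5 * x @ x
    x = np.array([3.0, -4.0])
    g = x                           # gradient of f at x
    print(armijo_step(f, x, g, -g))  # accepts t = 1 for this quadratic
```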

Citations: 1
Preface to Asen L. Dontchev Memorial Special Issue
Zone 2 (Mathematics) · Q2 (Mathematics, Applied) · Pub Date: 2023-11-03 · DOI: 10.1007/s10589-023-00537-5
William W. Hager, R. Tyrrell Rockafellar, Vladimir M. Veliov
{"title":"Preface to Asen L. Dontchev Memorial Special Issue","authors":"William W. Hager, R. Tyrrell Rockafellar, Vladimir M. Veliov","doi":"10.1007/s10589-023-00537-5","DOIUrl":"https://doi.org/10.1007/s10589-023-00537-5","url":null,"abstract":"","PeriodicalId":55227,"journal":{"name":"Computational Optimization and Applications","volume":"14 5","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135867996","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
COAP 2022 Best Paper Prize
Zone 2 (Mathematics) · Q2 (Mathematics, Applied) · Pub Date: 2023-10-30 · DOI: 10.1007/s10589-023-00538-4
{"title":"COAP 2022 Best Paper Prize","authors":"","doi":"10.1007/s10589-023-00538-4","DOIUrl":"https://doi.org/10.1007/s10589-023-00538-4","url":null,"abstract":"","PeriodicalId":55227,"journal":{"name":"Computational Optimization and Applications","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136104847","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0