
Latest publications in Optimization Methods and Software

On the numerical performance of finite-difference-based methods for derivative-free optimization
Pub Date : 2022-09-26 DOI: 10.1080/10556788.2022.2121832
H. Shi, M. Xuan, Figen Öztoprak, J. Nocedal
The goal of this paper is to investigate an approach for derivative-free optimization that has not received sufficient attention in the literature and is yet one of the simplest to implement and parallelize. In its simplest form, it consists of employing derivative-based methods for unconstrained or constrained optimization and replacing the gradient of the objective (and constraints) by finite-difference approximations. This approach is applicable to problems with or without noise in the functions. The differencing interval is determined by a bound on the second (or third) derivative and by the noise level, which is assumed to be known or to be accessible through difference tables or sampling. The use of finite-difference gradient approximations has been largely dismissed in the derivative-free optimization literature as too expensive in terms of function evaluations or as impractical in the presence of noise. However, the test results presented in this paper suggest that it has much to recommend it. The experiments compare newuoa, dfo-ls and cobyla against finite-difference versions of l-bfgs, lmder and knitro on three classes of problems: general unconstrained problems, nonlinear least squares problems and nonlinear programs with inequality constraints.
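The core idea above — replace the gradient with forward differences whose interval is chosen from a second-derivative bound M and the noise level ε_f — can be sketched as follows. This is a minimal illustration, not the authors' implementation; the interval formula h = 2·sqrt(ε_f / M) balances the truncation error (≈ Mh/2) against the noise error (≈ 2ε_f/h), and both M and ε_f are assumed known here, as in the paper's setup.

```python
import numpy as np

def fd_gradient(f, x, noise_level=1e-8, second_deriv_bound=1.0):
    """Forward-difference gradient approximation with a noise-aware interval.

    h = 2*sqrt(eps_f / M) minimizes the bound M*h/2 + 2*eps_f/h on the
    per-component error; eps_f (noise level) and M (second-derivative
    bound) are assumed known.
    """
    h = 2.0 * np.sqrt(noise_level / second_deriv_bound)
    fx = f(x)
    g = np.empty_like(x, dtype=float)
    for i in range(x.size):
        e = np.zeros_like(x, dtype=float)
        e[i] = h
        g[i] = (f(x + e) - fx) / h
    return g

# The approximation can then be plugged into any derivative-based solver,
# e.g. SciPy's L-BFGS-B:
#   from scipy.optimize import minimize
#   res = minimize(f, x0, jac=lambda x: fd_gradient(f, x), method="L-BFGS-B")
```

Note that each gradient estimate costs n + 1 function evaluations, but the n difference quotients are independent and therefore trivially parallelizable, which is the point the paper emphasizes.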
Citations: 4
A new randomized primal-dual algorithm for convex optimization with fast last iterate convergence rates
Pub Date : 2022-09-26 DOI: 10.1080/10556788.2022.2119233
Quoc Tran-Dinh, Deyi Liu
We develop a novel unified randomized block-coordinate primal-dual algorithm to solve a class of nonsmooth constrained convex optimization problems, which covers different existing variants and model settings from the literature. We prove convergence rates for our algorithm in two cases, mere convexity and strong convexity, where k is the iteration counter and n is the number of block-coordinates. These rates are known to be optimal (up to a constant factor) when n = 1. Our convergence rates are obtained through three criteria: primal objective residual and primal feasibility violation, dual objective residual, and primal-dual expected gap. Moreover, our rates for the primal problem are on the last-iterate sequence. Our dual convergence guarantee additionally requires a Lipschitz continuity assumption. We specify our algorithm to handle two important special cases, where our rates still apply. Finally, we verify our algorithm on two well-studied numerical examples and compare it with two existing methods. Our results show that the proposed method has encouraging performance on different experiments.
Citations: 2
New iterative algorithms with self-adaptive step size for solving split equality fixed point problem and its applications
Pub Date : 2022-09-09 DOI: 10.1080/10556788.2022.2117357
Yan Tang, Haiyun Zhou
The purpose of this paper is to propose a new alternative step size algorithm, which uses no projections and requires no prior knowledge of operator norms, for the split equality fixed point problem for a class of quasi-pseudo-contractive mappings. Under appropriate conditions, weak and strong convergence theorems for the presented algorithms are obtained, respectively. Furthermore, the algorithm proposed in this paper is also applied to approximate solutions of the split equality equilibrium and split equality inclusion problems.
Citations: 0
Practical perspectives on symplectic accelerated optimization
Pub Date : 2022-07-23 DOI: 10.1080/10556788.2023.2214837
Valentin Duruisseaux, M. Leok
Geometric numerical integration has recently been exploited to design symplectic accelerated optimization algorithms by simulating the Lagrangian and Hamiltonian systems from the variational framework introduced in Wibisono et al. In this paper, we discuss practical considerations which can significantly boost the computational performance of these optimization algorithms, and considerably simplify the tuning process. In particular, we investigate how momentum restarting schemes ameliorate computational efficiency and robustness by reducing the undesirable effect of oscillations, and ease the tuning process by making time-adaptivity superfluous. We also discuss how temporal looping helps avoid instability issues caused by numerical precision, without harming the computational efficiency of the algorithms. Finally, we compare the efficiency and robustness of different geometric integration techniques, and study the effects of the different parameters in the algorithms to inform and simplify tuning in practice. From this paper emerge symplectic accelerated optimization algorithms whose computational efficiency, stability and robustness have been improved, and which are now much simpler to use and tune for practical applications.
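Momentum restarting of the kind studied above can be illustrated on a classical accelerated gradient method. The sketch below uses the well-known gradient-based restart heuristic of O'Donoghue and Candès on Nesterov's scheme — a simplified stand-in for the restart schemes the paper examines, not the authors' symplectic integrators: whenever the step opposes the current direction of motion (an oscillation indicator), the momentum is reset.

```python
import numpy as np

def nesterov_with_restart(grad, x0, step, iters=500):
    """Nesterov-style accelerated gradient descent with gradient-based
    momentum restarting (O'Donoghue-Candes heuristic).

    Resetting t to 1 kills the momentum whenever <grad(y), x_new - x> > 0,
    i.e. whenever the update points against the direction of travel, which
    suppresses the oscillations that restarting schemes target.
    """
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(iters):
        g = grad(y)
        x_new = y - step * g
        if g @ (x_new - x) > 0:   # oscillation detected: restart momentum
            t = 1.0
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x
```

On ill-conditioned quadratics this typically converges linearly while remaining as easy to tune as plain gradient descent, which mirrors the paper's observation that restarting eases the tuning process.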
Citations: 1
Exact gradient methods with memory
Pub Date : 2022-07-20 DOI: 10.1080/10556788.2022.2091559
Mihai I. Florea
The Inexact Gradient Method with Memory (IGMM) is able to considerably outperform the Gradient Method by employing a piecewise linear lower model on the smooth part of the objective. However, the auxiliary problem can only be solved within a fixed tolerance at every iteration. The need to contain the inexactness narrows the range of problems to which IGMM can be applied and degrades the worst-case convergence rate. In this work, we show how a simple modification of IGMM removes the tolerance parameter from the analysis. The resulting Exact Gradient Method with Memory (EGMM) is as broadly applicable as the Bregman Distance Gradient Method/NoLips and has the same worst-case rate, the best for its class. Under necessarily stricter assumptions, we can accelerate EGMM without error accumulation, yielding an Accelerated Gradient Method with Memory (AGMM). In our preliminary computational experiments EGMM displays excellent performance, sometimes surpassing accelerated methods. When the model discards old information, AGMM also consistently exceeds the Fast Gradient Method.
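The piecewise linear lower model that gives these methods their "memory" is the pointwise maximum of past linearizations. The following is an illustrative sketch of such a model for a convex smooth function — the kind of object the methods above build on, not the authors' auxiliary-problem solver.

```python
import numpy as np

class PiecewiseLinearLowerModel:
    """Memory model l(x) = max_i [ f(x_i) + <g_i, x - x_i> ].

    For convex f, each linearization is a global lower bound, so their
    maximum is a piecewise linear lower model that tightens as more
    past points (x_i, f(x_i), grad f(x_i)) are stored.
    """
    def __init__(self):
        self.points, self.vals, self.grads = [], [], []

    def add(self, x, fx, gx):
        """Store one linearization of f at x."""
        self.points.append(np.asarray(x, dtype=float))
        self.vals.append(float(fx))
        self.grads.append(np.asarray(gx, dtype=float))

    def __call__(self, x):
        """Evaluate the lower model at x."""
        x = np.asarray(x, dtype=float)
        return max(v + g @ (x - p)
                   for p, v, g in zip(self.points, self.vals, self.grads))
```

At each iteration, a method with memory minimizes this model plus a proximal term instead of the single-linearization model of the plain gradient method; the paper's contribution is doing so without a fixed inexactness tolerance on that subproblem.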
Citations: 2
Sequential approximate optimization with adaptive parallel infill strategy assisted by inaccurate Pareto front
Pub Date : 2022-07-19 DOI: 10.1080/10556788.2022.2091560
Wenjie Wang, Pengyu Wang, Jiawei Yang, Fei Xiao, Weihua Zhang, Zeping Wu
Sequential Approximate Optimization (SAO) has been widely used in engineering optimization design problems to improve efficiency. The infill strategy is one of the critical techniques of SAO, and it is of paramount importance to surrogate model accuracy and optimization efficiency. In this paper, an adaptive parallel infill strategy for surrogate-based single-objective optimization is proposed within a multi-objective optimization framework to balance exploration and exploitation during the optimization process. Within this method, an inaccurate Pareto front is adopted to assist the infilling of the sampling points. The proposed SAO method with its adaptive parallel sampling strategy is tested on several numerical test cases and an engineering test case, with the optimization results compared to state-of-the-art optimization algorithms. The results show that the proposed SAO with the adaptive parallel sampling strategy possesses excellent performance and better stability.
Citations: 0
A stochastic approximation method for convex programming with many semidefinite constraints
Pub Date : 2022-07-19 DOI: 10.1080/10556788.2022.2091563
L. Pang, Ming-Kun Zhang, X. Xiao
In this paper, we consider a type of semidefinite programming problem (MSDP), which involves many (not necessarily finitely many) semidefinite constraints. MSDP arises in a wide range of applications, including the covering ellipsoids problem and truss topology design. We propose a random method based on a stochastic approximation technique for solving MSDP, without calculating and storing the multiplier. Under mild conditions, we establish the almost sure convergence and expected convergence rates of the proposed method. A variety of simulation experiments are carried out to support our theoretical results.
Citations: 0
On inexact stochastic splitting methods for a class of nonconvex composite optimization problems with relative error
Pub Date : 2022-07-19 DOI: 10.1080/10556788.2022.2091562
Jia Hu, Congying Han, Tiande Guo, Tong Zhao
We consider minimizing a class of nonconvex composite stochastic optimization problems, and deterministic optimization problems whose objective function consists of an expectation function (or an average of finitely many smooth functions) and a weakly convex but potentially nonsmooth function. In this paper, we focus on the theoretical properties of two types of stochastic splitting methods for solving these nonconvex optimization problems: the stochastic alternating direction method of multipliers and stochastic proximal gradient descent. In particular, several inexact versions of these two types of splitting methods are studied. At each iteration, the proposed schemes inexactly solve their subproblems by using relative error criteria instead of exogenous and diminishing error rules, which allows our approaches to handle some complex regularized problems in statistics and machine learning.
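Of the two splitting schemes named above, stochastic proximal gradient descent is the simpler to sketch. The toy version below handles an average of smooth components plus an l1 regularizer; it is illustrative only, and it solves the prox subproblem exactly via soft-thresholding, whereas the paper's point is precisely that inexact prox solves under a relative-error criterion suffice.

```python
import numpy as np

def soft_threshold(v, t):
    """Exact proximal operator of t*||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def stochastic_prox_grad(grads, x0, step, lam, epochs=200, seed=0):
    """Stochastic proximal gradient descent for
    (1/n) * sum_i f_i(x) + lam * ||x||_1.

    grads is a list of the component gradients grad f_i.  Each iteration
    samples one component, takes a gradient step on it, then applies the
    prox of the nonsmooth term.  (Here the prox is exact; the paper
    studies inexact prox solves with relative error.)
    """
    rng = np.random.default_rng(seed)
    x = x0.astype(float)
    n = len(grads)
    for _ in range(epochs):
        i = rng.integers(n)                              # sample a component
        x = soft_threshold(x - step * grads[i](x), step * lam)
    return x
```

Replacing the exact `soft_threshold` call with an approximate subproblem solver whose error is bounded relative to the step taken gives the kind of scheme the paper analyses.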
Citations: 0
A quasi-Newton method in shape optimization for a transmission problem
Pub Date : 2022-06-01 DOI: 10.1080/10556788.2022.2078823
Petar Kunštek, M. Vrdoljak
We consider optimal design problems in stationary diffusion for mixtures of two isotropic phases. The goal is to find an optimal distribution of the phases such that the energy functional is maximized. By following the identity perturbation method, we calculate the first- and second-order shape derivatives in the distributional representation under weak regularity assumptions. Ascent methods based on the distributed first- and second-order shape derivatives are implemented and tested on classes of problems for which the classical solutions exist and can be explicitly calculated from the optimality conditions. The proposed quasi-Newton method offers a better ascent vector than gradient methods, reaching the optimal design in half as many steps. The method also applies well to multiple-state problems.
Citations: 0
A wide neighbourhood predictor–corrector infeasible-interior-point algorithm for symmetric cone programming
Pub Date : 2022-05-12 DOI: 10.1080/10556788.2022.2060970
M. S. Shahraki, H. Mansouri, A. Delavarkhalafi
In this paper, we propose a new predictor–corrector infeasible-interior-point algorithm for symmetric cone programming. Each iterate always follows the usual wide neighbourhood; it does not necessarily stay within it, but must stay within the wider neighbourhood. We prove that, besides the predictor step, each corrector step also reduces the duality gap, where r is the rank of the associated Euclidean Jordan algebra. Moreover, we improve the theoretical complexity bound of an infeasible-interior-point method. Some numerical results are provided as well.
Citations: 0