
Optimization Methods and Software: Latest Publications

Feasible Newton methods for symmetric tensor Z-eigenvalue problems
Pub Date: 2022-11-14 | DOI: 10.1080/10556788.2022.2142586
Jiefeng Xu, Donghui Li, Xueli Bai
Finding a Z-eigenpair of a symmetric tensor is equivalent to finding a Karush–Kuhn–Tucker point of a sphere-constrained minimization problem. Based on this equivalence, in this paper, we first propose a class of iterative methods to get a Z-eigenpair of a symmetric tensor. Each method generates a sequence of feasible points along which the function values decrease. These methods can be regarded as extensions of descent methods for unconstrained optimization problems. We pay particular attention to the Newton method. We show that under appropriate conditions, the Newton method is globally and quadratically convergent. Moreover, after finitely many iterations, the unit steplength is always accepted. We also propose a nonlinear equations-based Newton method and establish its global and quadratic convergence. Finally, we perform several numerical experiments to test the proposed Newton methods. The results show that both Newton methods are very efficient.
Citations: 1
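For concreteness, a Z-eigenpair (x, λ) of an order-3 symmetric tensor A satisfies A x² = λx with ‖x‖ = 1, which is exactly the KKT system of the sphere-constrained problem. The sketch below applies a plain Newton iteration to these nonlinear equations, in the spirit of the abstract's nonlinear-equations-based method but without its globalization, so it may fail from poor starting points; all names and the random test tensor are illustrative.

```python
import numpy as np

def z_eigenpair_newton(A, x0, tol=1e-10, max_iter=50):
    """Newton's method on F(x, lam) = [A x^2 - lam*x; (1 - x'x)/2] = 0
    for an order-3 symmetric tensor A of shape (n, n, n)."""
    n = x0.size
    x = x0 / np.linalg.norm(x0)
    lam = np.einsum('ijk,i,j,k->', A, x, x, x)   # Rayleigh-quotient-like start
    for _ in range(max_iter):
        Ax2 = np.einsum('ijk,j,k->i', A, x, x)   # (A x^2)_i
        F = np.concatenate([Ax2 - lam * x, [0.5 * (1.0 - x @ x)]])
        if np.linalg.norm(F) < tol:
            break
        Ax = np.einsum('ijk,k->ij', A, x)        # matrix (A x)_{ij}
        J = np.zeros((n + 1, n + 1))
        J[:n, :n] = 2.0 * Ax - lam * np.eye(n)   # d/dx of (A x^2 - lam*x)
        J[:n, n] = -x                            # d/dlam of the first block
        J[n, :n] = -x                            # gradient of the sphere residual
        step = np.linalg.solve(J, -F)
        x, lam = x + step[:n], lam + step[n]
    return x, lam

# usage on a random symmetrized tensor
rng = np.random.default_rng(0)
T = rng.standard_normal((4, 4, 4))
T = (T + T.transpose(0, 2, 1) + T.transpose(1, 0, 2)
       + T.transpose(1, 2, 0) + T.transpose(2, 0, 1) + T.transpose(2, 1, 0)) / 6
x, lam = z_eigenpair_newton(T, rng.standard_normal(4))
print(lam, np.linalg.norm(np.einsum('ijk,j,k->i', T, x, x) - lam * x))
```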
Nonconvex equilibrium models for energy markets: exploiting price information to determine the existence of an equilibrium
Pub Date: 2022-11-11 | DOI: 10.1080/10556788.2022.2117358
Julia Grübel, Olivier Huber, Lukas Hümbs, Max Klimm, Martin Schmidt, Alexandra Schwartz
Motivated by examples from the energy sector, we consider market equilibrium problems (MEPs) involving players with nonconvex strategy spaces or objective functions, where the latter are assumed to be linear in market prices. We propose an algorithm that determines whether an equilibrium of such an MEP exists and that computes an equilibrium if one does. Three key prerequisites have to be met. First, appropriate bounds on market prices have to be derived from the necessary optimality conditions of some players. Second, a technical assumption is required for those prices that are not uniquely determined by the derived bounds. Third, nonconvex optimization problems have to be solved to global optimality. We test the algorithm on well-known instances from the power and gas literature that meet these three prerequisites. There, nonconvexities arise from modelling the transmission system operator, which, e.g. switches lines or faces nonlinear physical laws, as an additional player besides producers and consumers. Our numerical results indicate that equilibria often exist, especially in the case of continuous nonconvexities arising in gas market problems.
Citations: 1
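To see why existence is the delicate question here, consider a toy single-good market in which the producer's only choices are "off" or "full capacity" (a nonconvex strategy set) and demand is affine. This example is illustrative and not from the paper: depending on the data, no price both clears the market and keeps the producer's choice a best response.

```python
import numpy as np

def find_equilibria(a, b, cost, cap):
    """Search for market-clearing prices when the producer may only
    choose q = 0 or q = cap (a nonconvex strategy set).
    Inverse demand: p = a - b*q. Returns a list of (price, quantity)."""
    equilibria = []
    for q in (0.0, cap):                    # enumerate the binary strategy set
        p = a - b * q                       # price clearing the market at q
        profit = (p - cost) * q
        best = max((p - cost) * qq for qq in (0.0, cap))
        if np.isclose(profit, best):        # q must be a best response at p
            equilibria.append((p, q))
    return equilibria

print(find_equilibria(a=10, b=1, cost=4, cap=3))   # equilibrium exists: [(7.0, 3.0)]
print(find_equilibria(a=10, b=1, cost=4, cap=8))   # no equilibrium: []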
An approximate Newton-type proximal method using symmetric rank-one updating formula for minimizing the nonsmooth composite functions
Pub Date: 2022-11-11 | DOI: 10.1080/10556788.2022.2142587
Z. Aminifard, S. Babaie-Kafaki
Building on the scaled memoryless symmetric rank-one updating formula, we propose an approximate Newton-type proximal strategy for minimizing nonsmooth composite functions. More precisely, to approximate the inverse Hessian of the smooth part of the objective function, a symmetric rank-one matrix is employed to compute the search directions directly for a special category of well-known functions. Convergence of the given algorithm is established using a nonmonotone backtracking line search adapted to the corresponding nonsmooth model. Its practical advantages are also demonstrated computationally on two well-known real-world models.
Citations: 1
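The abstract's ingredients (a proximal step on the nonsmooth part, a quasi-Newton scaling for the smooth part, and a nonmonotone backtracking line search) can be illustrated with a scalar Barzilai–Borwein scaling in place of the paper's memoryless SR1 matrix, on an l1-regularized least-squares model. A minimal sketch under those substitutions:

```python
import numpy as np

def prox_l1(v, t):
    """Proximal operator of t*||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def F(A, b, lam, x):
    """Composite objective 0.5||Ax-b||^2 + lam*||x||_1."""
    r = A @ x - b
    return 0.5 * (r @ r) + lam * np.abs(x).sum()

def scaled_prox_grad(A, b, lam, iters=300, memory=5, sigma=1e-4):
    """BB-scaled proximal gradient with a nonmonotone backtracking
    line search; the paper replaces the scalar BB scaling with a
    memoryless SR1 inverse-Hessian approximation."""
    x = np.zeros(A.shape[1])
    g = A.T @ (A @ x - b)
    t = 1.0
    hist = [F(A, b, lam, x)]
    for _ in range(iters):
        while True:                       # nonmonotone acceptance test
            x_new = prox_l1(x - t * g, t * lam)
            d = x_new - x
            if F(A, b, lam, x_new) <= max(hist[-memory:]) - sigma / t * (d @ d):
                break
            t *= 0.5                      # backtrack
        g_new = A.T @ (A @ x_new - b)
        s, y = x_new - x, g_new - g
        t = (s @ s) / (s @ y) if s @ y > 1e-12 else 1.0   # BB1 trial step
        x, g = x_new, g_new
        hist.append(F(A, b, lam, x))
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((60, 120))
x_true = np.zeros(120); x_true[:5] = 1.0
x = scaled_prox_grad(A, A @ x_true, lam=0.1)
print(np.count_nonzero(np.abs(x) > 1e-6))   # recovered support size
```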
Non-convex regularization and accelerated gradient algorithm for sparse portfolio selection
Pub Date: 2022-11-10 | DOI: 10.1080/10556788.2022.2142580
Qian Li, Wei Zhang, Guoqiang Wang, Yanqin Bai
In portfolio optimization, non-convex regularization has recently been recognized as an important approach for promoting sparsity while mitigating the shortcomings of convex penalties. In this paper, we tailor the non-convex piecewise quadratic approximation (PQA) function to the setting of portfolio management and present the PQA-regularized mean–variance model (PMV). By exploiting the structure of PMV, we prove that a KKT point of PMV is a local minimizer if the regularization parameter satisfies a mild condition. Moreover, the theoretical sparsity of PMV is analysed in terms of the regularization parameter and the weight parameter. To solve this model, we introduce the accelerated proximal gradient (APG) algorithm and establish its improved linear convergence rate relative to the proximal gradient (PG) algorithm. We also derive the optimal acceleration parameter of the APG algorithm for PMV. These theoretical results are further illustrated with numerical experiments. Finally, empirical analysis demonstrates that the proposed model achieves better out-of-sample performance and lower turnover than many other existing models on the tested datasets.
Citations: 1
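The PQA penalty itself is not specified in the abstract, so the sketch below runs a standard accelerated proximal gradient (FISTA) iteration on a mean–variance objective with an l1 stand-in penalty and no budget constraint; only the APG mechanics are the point here, and all data are synthetic.

```python
import numpy as np

def apg_portfolio(Sigma, mu, lam, rho=1.0, iters=500):
    """Accelerated proximal gradient (FISTA) for
    min_x 0.5*rho*x'Sigma x - mu'x + lam*||x||_1.
    The l1 term stands in for the paper's PQA penalty."""
    n = mu.size
    L = rho * np.linalg.eigvalsh(Sigma)[-1]      # Lipschitz const. of smooth part
    x = y = np.zeros(n)
    tk = 1.0
    for _ in range(iters):
        g = rho * (Sigma @ y) - mu               # gradient of the smooth part
        z = y - g / L
        x_new = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # prox step
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * tk ** 2))
        y = x_new + (tk - 1.0) / t_new * (x_new - x)   # Nesterov extrapolation
        x, tk = x_new, t_new
    return x

rng = np.random.default_rng(2)
B = rng.standard_normal((30, 10))
Sigma = B.T @ B / 30 + 0.01 * np.eye(10)   # toy covariance matrix
mu = rng.uniform(0.0, 0.1, 10)             # toy expected returns
w = apg_portfolio(Sigma, mu, lam=0.01)
print(np.round(w, 3))                      # sparse toy portfolio weights
```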
Sparse convex optimization toolkit: a mixed-integer framework
Pub Date: 2022-10-30 | DOI: 10.1080/10556788.2023.2222429
A. Olama, E. Camponogara, Jan Kronqvist
This paper proposes an open-source distributed solver for Sparse Convex Optimization (SCO) problems over computational networks. Motivated by past algorithmic advances in mixed-integer optimization, the Sparse Convex Optimization Toolkit (SCOT) adopts a mixed-integer approach to find exact solutions to SCO problems. In particular, SCOT brings together various techniques to transform the original SCO problem into an equivalent convex Mixed-Integer Nonlinear Programming (MINLP) problem that can benefit from high-performance and parallel computing platforms. To solve the equivalent mixed-integer problem, we present the Distributed Hybrid Outer Approximation (DiHOA) algorithm, which builds upon LP/NLP-based branch-and-bound and is tailored to this specific problem structure. The DiHOA algorithm combines the so-called single- and multi-tree outer approximation, naturally integrates a decentralized algorithm for distributed convex nonlinear subproblems, and utilizes enhancement techniques such as quadratic cuts. Finally, we present detailed computational experiments that show the benefit of our solver through numerical benchmarks on 140 SCO problems with distributed datasets. To show the overall efficiency of SCOT, we also provide performance profiles comparing SCOT to other state-of-the-art MINLP solvers.
Citations: 0
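The outer-approximation mechanics the abstract builds on can be illustrated on a toy sparse least-squares problem with big-M binaries. The sketch below is a plain multi-tree OA loop (alternate between a cutting-plane MILP master and a support-restricted NLP subproblem), not SCOT's distributed DiHOA algorithm; it assumes SciPy >= 1.9 for scipy.optimize.milp, and the model, bounds, and tolerances are illustrative.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

def oa_sparse_lsq(A, b, k, M=10.0, tol=1e-6, max_iter=30):
    """Multi-tree outer approximation for
    min 0.5||Ax-b||^2  s.t.  |x_i| <= M*z_i,  sum(z) <= k,  z binary."""
    m, n = A.shape
    f = lambda x: 0.5 * np.sum((A @ x - b) ** 2)
    grad = lambda x: A.T @ (A @ x - b)
    cuts, x_cut = [], np.zeros(n)
    best_x, ub = None, np.inf
    for _ in range(max_iter):
        g = grad(x_cut)
        cuts.append((g, g @ x_cut - f(x_cut)))     # eta >= f(xj) + g'(x - xj)
        # master MILP over variables [x (n), z (n), eta (1)]
        rows, lbs, ubs = [], [], []
        for gj, rj in cuts:
            rows.append(np.r_[gj, np.zeros(n), -1.0]); lbs.append(-np.inf); ubs.append(rj)
        eye = np.eye(n)
        rows += list(np.c_[eye, -M * eye, np.zeros(n)])    #  x - M z <= 0
        lbs += [-np.inf] * n; ubs += [0.0] * n
        rows += list(np.c_[-eye, -M * eye, np.zeros(n)])   # -x - M z <= 0
        lbs += [-np.inf] * n; ubs += [0.0] * n
        rows.append(np.r_[np.zeros(n), np.ones(n), 0.0])   # sum(z) <= k
        lbs.append(-np.inf); ubs.append(float(k))
        res = milp(c=np.r_[np.zeros(2 * n), 1.0],
                   constraints=LinearConstraint(np.array(rows), lbs, ubs),
                   integrality=np.r_[np.zeros(n), np.ones(n), 0.0],
                   bounds=Bounds(np.r_[-M * np.ones(n), np.zeros(n), 0.0],
                                 np.r_[M * np.ones(n), np.ones(n), np.inf]))
        if res.x is None:
            break
        lb = res.x[-1]                             # master optimum = lower bound
        z = np.round(res.x[n:2 * n]).astype(bool)
        # NLP subproblem: least squares restricted to the chosen support
        x_sub = np.zeros(n)
        if z.any():
            x_sub[z] = np.linalg.lstsq(A[:, z], b, rcond=None)[0]
            x_sub = np.clip(x_sub, -M, M)
        if f(x_sub) < ub:
            ub, best_x = f(x_sub), x_sub.copy()
        if ub - lb <= tol:
            break
        x_cut = x_sub                              # linearize at the new point
    return best_x, ub, lb

rng = np.random.default_rng(3)
A = rng.standard_normal((40, 15))
xt = np.zeros(15); xt[[2, 7, 11]] = [1.5, -2.0, 1.0]
x, ub, lb = oa_sparse_lsq(A, A @ xt, k=3)
print(np.nonzero(np.abs(x) > 1e-8)[0], ub)
```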
Linear programming with nonparametric penalty programs and iterated thresholding
Pub Date: 2022-10-13 | DOI: 10.1080/10556788.2022.2117356
Jeffery Kline, Glenn M. Fung
It is known [Mangasarian, A Newton method for linear programming, J. Optim. Theory Appl. 121 (2004), pp. 1–18] that every linear program can be solved exactly by minimizing an unconstrained quadratic penalty program. The penalty program is parameterized by a scalar t>0, and one is able to solve the original linear program in this manner when t is selected larger than a finite, but unknown, threshold. In this paper, we show that every linear program can be solved using the solution to a parameter-free penalty program. We also characterize the solutions to the quadratic penalty programs using fixed points of certain nonexpansive maps. This leads to an iterative thresholding algorithm that converges to a desired limit point. We show in numerical experiments that this iterative method can outperform a variety of standard quadratic program solvers. Finally, we show that for every value of the penalty parameter, the solution one obtains by solving the parameterized penalty program is guaranteed to lie in the feasible set of the original linear program.
Citations: 0
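For intuition, here is a generic smooth quadratic exterior penalty for the LP min c'x s.t. Ax >= b, x >= 0. Unlike the exact penalty programs studied in the paper (and in Mangasarian's construction), this plain primal penalty is only asymptotically exact: the minimizer approaches an LP solution as t grows. The formulation and the solver choice (L-BFGS-B) are illustrative.

```python
import numpy as np
from scipy.optimize import minimize, linprog

def lp_via_penalty(c, A, b, t=1e4):
    """Minimize c'x + (t/2)(||(b-Ax)_+||^2 + ||(-x)_+||^2), a smooth (C^1)
    exterior penalty for the LP  min c'x s.t. Ax >= b, x >= 0."""
    def fg(x):
        u = np.maximum(b - A @ x, 0.0)        # inequality violations
        v = np.maximum(-x, 0.0)               # nonnegativity violations
        f = c @ x + 0.5 * t * (u @ u + v @ v)
        g = c - t * (A.T @ u) - t * v         # gradient of the penalty
        return f, g
    res = minimize(fg, np.zeros(len(c)), jac=True, method='L-BFGS-B')
    return res.x

c = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0], [2.0, 1.0]])   # x1 + x2 >= 2, 2*x1 + x2 >= 3
b = np.array([2.0, 3.0])
x_pen = lp_via_penalty(c, A, b)
x_ref = linprog(c, A_ub=-A, b_ub=-b).x   # reference solution via HiGHS
print(np.round(x_pen, 4), np.round(x_ref, 4))
```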
Implementation of a projection and rescaling algorithm for second-order conic feasibility problems
Pub Date: 2022-10-06 | DOI: 10.1080/10556788.2022.2119234
Javier F. Pena, Negar Soheili
This paper documents a computational implementation of a projection and rescaling algorithm for solving one of the alternative feasibility problems: find a point in L ∩ int(K) or find a point in L^⊥ ∩ int(K), where L is a linear subspace of R^n, L^⊥ is its orthogonal complement, and int(K) is the interior of a direct product K of second-order cones. The gist of the projection and rescaling algorithm is to enhance a low-cost first-order method (a basic procedure) with an adaptive reconditioning transformation (a rescaling step). We give a full description of a Python implementation of this algorithm and present multiple sets of numerical experiments on synthetic problem instances with varied levels of conditioning. Our computational experiments provide promising evidence of the effectiveness of the projection and rescaling algorithm. Our Python code is publicly available. Furthermore, the simplicity of the algorithm makes a computational implementation in other environments completely straightforward.
Citations: 0
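The cone-projection building block has a closed form, which is what makes a low-cost basic procedure possible. Below is a sketch of the projection operators plus a naive alternating-projection loop used as a stand-in for the paper's basic procedure; the rescaling step (and any guarantees) is omitted, and the renormalization heuristic that steers away from the trivial point 0 is an illustrative choice, not the paper's scheme.

```python
import numpy as np

def proj_soc(s):
    """Euclidean projection onto the second-order cone
    K = {(s0, sbar): ||sbar||_2 <= s0} (closed-form formula)."""
    s0, sbar = s[0], s[1:]
    r = np.linalg.norm(sbar)
    if r <= s0:
        return s.copy()                       # already in the cone
    if r <= -s0:
        return np.zeros_like(s)               # in the polar: project to 0
    alpha = 0.5 * (s0 + r)
    return np.concatenate([[alpha], (alpha / r) * sbar])

def proj_product_soc(x, dims):
    """Block-wise projection onto a direct product of second-order cones."""
    out, i = [], 0
    for d in dims:
        out.append(proj_soc(x[i:i + d])); i += d
    return np.concatenate(out)

def basic_procedure(Q, dims, iters=200):
    """Heuristic alternating projections between L = range(Q) and K,
    renormalizing each sweep to avoid collapsing to 0."""
    x = np.ones(Q.shape[0])
    for _ in range(iters):
        x = Q @ (Q.T @ x)                     # project onto the subspace L
        x = proj_product_soc(x, dims)         # project onto the cone K
        n = np.linalg.norm(x)
        if n < 1e-14:
            break
        x /= n
    return x

dims = (3, 4)
e = np.zeros(7); e[0] = 1.0; e[3] = 1.0       # an interior point of K
M = np.c_[e, np.random.default_rng(4).standard_normal(7)]
Q, _ = np.linalg.qr(M)                        # orthonormal basis of a toy L
x = basic_procedure(Q, dims)
# positive margins indicate a strictly interior point of each cone block
print(x[0] - np.linalg.norm(x[1:3]), x[3] - np.linalg.norm(x[4:7]))
```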
On minty variational principle for quasidifferentiable vector optimization problems
Pub Date: 2022-09-30 | DOI: 10.1080/10556788.2022.2119235
H. Singh, Vivek Laha
This paper deals with quasidifferentiable vector optimization problems involving functions that are invex with respect to convex compact sets. We present vector variational-like inequalities of Minty type and of Stampacchia type in terms of quasidifferentials, denoted by (QMVVLI) and (QSVVLI), respectively. By utilizing these variational inequalities, we derive necessary and sufficient optimality conditions for an efficient solution of the quasidifferentiable vector optimization problem involving invex functions with respect to convex compact sets. We also establish various results for the solutions of the corresponding weak versions of the vector variational-like inequalities in terms of quasidifferentials.
Citations: 1
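For orientation, the classical smooth scalar analogues of the two inequality types are recalled below; this is standard background rather than the paper's notation, which replaces the gradient by quasidifferentials and works with vector-valued objectives and efficiency instead of minimality.

```latex
% Smooth scalar analogues over a feasible set C:
\text{(SVI)}\quad \text{find } x^{*} \in C:\;
  \langle \nabla f(x^{*}),\, y - x^{*} \rangle \ge 0 \quad \forall\, y \in C,
\qquad
\text{(MVI)}\quad \text{find } x^{*} \in C:\;
  \langle \nabla f(y),\, y - x^{*} \rangle \ge 0 \quad \forall\, y \in C.
```

For a smooth convex objective, solutions of (MVI) coincide with the minimizers of f over C; the "Minty variational principle" refers to equivalences of this kind, which the paper extends to (QMVVLI) and (QSVVLI).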
HyKKT: a hybrid direct-iterative method for solving KKT linear systems
Pub Date: 2022-09-27 | DOI: 10.1080/10556788.2022.2124990
Shaked Regev, Nai-yuan Chiang, Eric F. Darve, C. Petra, M. Saunders, K. Świrydowicz, Slaven Peleš
We propose a solution strategy for the large indefinite linear systems arising in interior methods for nonlinear optimization. The method is suitable for implementation on hardware accelerators such as graphics processing units (GPUs). The current gold standard for sparse indefinite systems is the LBL^T factorization, where L is a lower triangular matrix and B is block diagonal with 1×1 or 2×2 blocks. However, this requires pivoting, which substantially increases communication cost and degrades performance on GPUs. Our approach solves a large indefinite system by solving multiple smaller positive definite systems, using an iterative solver on the Schur complement and an inner direct solve (via Cholesky factorization) within each iteration. Cholesky is stable without pivoting, thereby reducing communication and allowing reuse of the symbolic factorization. We demonstrate the practicality of our approach on large optimal power flow problems and show that it can efficiently utilize GPUs and outperform LBL^T factorization of the full system.
Citations: 5
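A minimal dense sketch of the hybrid idea on an equality-constrained KKT system, assuming the regularized matrix H_γ = H + γ JᵀJ is positive definite (a γ-regularization borrowed from the Golub–Greif family of methods, which matches the abstract's description): Cholesky-factor H_γ once without pivoting, then run CG on the SPD Schur complement S = J H_γ⁻¹ Jᵀ, whose matrix-vector products reuse the factorization. The toy problem and parameters are illustrative.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def cg_solve(matvec, b, tol=1e-12, maxiter=500):
    """Textbook conjugate gradient for an SPD operator given as a matvec."""
    x = np.zeros_like(b)
    r = b - matvec(x)
    p = r.copy()
    rs = r @ r
    for _ in range(maxiter):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def hybrid_kkt_solve(H, J, r1, r2, gamma=100.0):
    """Solve [[H, J^T], [J, 0]] [dx; dy] = [r1; r2] via one pivot-free
    Cholesky factorization of H_gamma = H + gamma*J^T J and CG on the
    SPD Schur complement S = J H_gamma^{-1} J^T."""
    Hg = H + gamma * (J.T @ J)
    chol = cho_factor(Hg)                      # inner direct solve, no pivoting
    r1h = r1 + gamma * (J.T @ r2)              # equivalent regularized RHS
    rhs = J @ cho_solve(chol, r1h) - r2
    dy = cg_solve(lambda v: J @ cho_solve(chol, J.T @ v), rhs)
    dx = cho_solve(chol, r1h - J.T @ dy)
    return dx, dy

rng = np.random.default_rng(5)
n, m = 8, 3
B = rng.standard_normal((n, n)); H = B @ B.T + np.eye(n)   # SPD toy Hessian
J = rng.standard_normal((m, n))                            # full row rank (a.s.)
r1, r2 = rng.standard_normal(n), rng.standard_normal(m)
dx, dy = hybrid_kkt_solve(H, J, r1, r2)
print(np.linalg.norm(H @ dx + J.T @ dy - r1), np.linalg.norm(J @ dx - r2))
```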
Stochastic distributed learning with gradient quantization and double-variance reduction
Pub Date: 2022-09-27 | DOI: 10.1080/10556788.2022.2117355
Samuel Horváth, D. Kovalev, Konstantin Mishchenko, Peter Richtárik, S. Stich
We consider distributed optimization over several devices, each sending incremental model updates to a central server. This setting is considered, for instance, in federated learning. Various schemes have been designed to compress the model updates in order to reduce the overall communication cost. However, existing methods suffer from a significant slowdown due to the additional variance introduced by the compression operator and, as a result, converge only sublinearly. What is needed is a variance reduction technique for taming the variance introduced by compression. We propose the first methods that achieve linear convergence for arbitrary compression operators. For strongly convex functions with condition number κ, distributed among n machines with a finite-sum structure in which each worker holds fewer than m components, we also (i) give an analysis for the weakly convex and the non-convex cases and (ii) verify in experiments that our novel variance-reduced schemes are more efficient than the baselines. Moreover, we show theoretically that, as the number of devices increases, higher compression levels become possible without affecting the overall number of communications, in comparison with methods that do not perform any compression. This leads to a significant reduction in communication cost. Our general analysis allows us to pick the most suitable compression for each problem, finding the right balance between additional variance and communication savings. Finally, we also (iii) give an analysis for arbitrary quantized updates.
Citations: 12
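A DIANA-style shift is one concrete instance of the variance reduction described here: each device compresses the difference between its gradient and a local shift h_i, so the compressed message (and hence the compression variance) shrinks as training approaches the optimum. A minimal single-machine simulation, assuming full local gradients, quadratic local losses, and a rand-k sparsifier; these are illustrative choices, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(6)

def rand_k(v, k):
    """Unbiased rand-k sparsifier: keep k random coordinates, scale by d/k."""
    d = v.size
    idx = rng.choice(d, size=k, replace=False)
    out = np.zeros(d)
    out[idx] = (d / k) * v[idx]
    return out

def diana(A_list, b_list, k=2, steps=1000):
    """DIANA-style compressed gradient descent: each worker compresses
    only the difference grad_i - h_i, so the compression variance
    vanishes as h_i tracks grad_i (the variance-reduction effect)."""
    n, d = len(A_list), A_list[0].shape[1]
    x = np.zeros(d)
    h = [np.zeros(d) for _ in range(n)]
    alpha = k / d                                  # safe shift step for rand-k
    L = max(np.linalg.norm(A.T @ A, 2) for A in A_list)
    eta = 0.25 / L                                 # conservative server step
    for _ in range(steps):
        g = np.zeros(d)
        for i, (A, b) in enumerate(zip(A_list, b_list)):
            delta = rand_k(A.T @ (A @ x - b) - h[i], k)   # compressed message
            g += h[i] + delta                      # server-side gradient estimate
            h[i] += alpha * delta                  # worker updates its shift
        x -= eta * g / n
    return x

d = 6
A_list = [rng.standard_normal((20, d)) for _ in range(4)]
x_star = rng.standard_normal(d)
b_list = [A @ x_star for A in A_list]
print(np.linalg.norm(diana(A_list, b_list) - x_star))   # distance to optimum
```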