Minh Tùng Nguyễn, Tiến-Sơn Phạm
"Clarke’s Tangent Cones, Subgradients, Optimality Conditions, and the Lipschitzness at Infinity." DOI: 10.1137/23m1545367
SIAM Journal on Optimization, Volume 34, Issue 2, Page 1732-1754, June 2024. Abstract. We first study Clarke’s tangent cones at infinity to unbounded subsets of [math]. We prove that these cones are closed and convex and give a characterization of their interiors. We then study subgradients at infinity for extended-real-valued functions on [math] and derive necessary optimality conditions at infinity for optimization problems. We also give a number of rules for computing subgradients at infinity and provide some characterizations of Lipschitz continuity at infinity for lower semicontinuous functions.
{"title":"Clarke’s Tangent Cones, Subgradients, Optimality Conditions, and the Lipschitzness at Infinity","authors":"Minh Tùng Nguyễn, Tiến-Sơn Phạm","doi":"10.1137/23m1545367","DOIUrl":"https://doi.org/10.1137/23m1545367","url":null,"abstract":"SIAM Journal on Optimization, Volume 34, Issue 2, Page 1732-1754, June 2024. <br/> Abstract. We first study Clarke’s tangent cones at infinity to unbounded subsets of [math]. We prove that these cones are closed convex and show a characterization of their interiors. We then study subgradients at infinity for extended real value functions on [math] and derive necessary optimality conditions at infinity for optimization problems. We also give a number of rules for the computing of subgradients at infinity and provide some characterizations of the Lipschitz continuity at infinity for lower semicontinuous functions.","PeriodicalId":49529,"journal":{"name":"SIAM Journal on Optimization","volume":null,"pages":null},"PeriodicalIF":3.1,"publicationDate":"2024-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140929986","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Didier Henrion, Milan Korda, Martin Kruzik, Rodolfo Rios-Zertuche
"Occupation Measure Relaxations in Variational Problems: The Role of Convexity." DOI: 10.1137/23m1557088
SIAM Journal on Optimization, Volume 34, Issue 2, Page 1708-1731, June 2024. Abstract. This work addresses the occupation measure relaxation of calculus of variations problems, which is an infinite-dimensional linear programming reformulation amenable to numerical approximation by a hierarchy of semidefinite optimization problems. We address the problem of equivalence of this relaxation to the original problem. Our main result provides sufficient conditions for this equivalence. These conditions, revolving around the convexity of the data, are simple and apply in very general settings that may be of arbitrary dimensions and may include pointwise and integral constraints, thereby considerably strengthening the existing results. Our conditions are also extended to optimal control problems. In addition, we demonstrate how these results can be applied in nonconvex settings, showing that the occupation measure relaxation is at least as strong as the convexification using the convex envelope; in doing so, we prove that a certain weakening of the occupation measure relaxation is equivalent to the convex envelope. This opens the way to application of the occupation measure relaxation in situations where the convex envelope relaxation is known to be equivalent to the original problem, which includes problems in magnetism and elasticity.
{"title":"Occupation Measure Relaxations in Variational Problems: The Role of Convexity","authors":"Didier Henrion, Milan Korda, Martin Kruzik, Rodolfo Rios-Zertuche","doi":"10.1137/23m1557088","DOIUrl":"https://doi.org/10.1137/23m1557088","url":null,"abstract":"SIAM Journal on Optimization, Volume 34, Issue 2, Page 1708-1731, June 2024. <br/> Abstract. This work addresses the occupation measure relaxation of calculus of variations problems, which is an infinite-dimensional linear programming reformulation amenable to numerical approximation by a hierarchy of semidefinite optimization problems. We address the problem of equivalence of this relaxation to the original problem. Our main result provides sufficient conditions for this equivalence. These conditions, revolving around the convexity of the data, are simple and apply in very general settings that may be of arbitrary dimensions and may include pointwise and integral constraints, thereby considerably strengthening the existing results. Our conditions are also extended to optimal control problems. In addition, we demonstrate how these results can be applied in nonconvex settings, showing that the occupation measure relaxation is at least as strong as the convexification using the convex envelope; in doing so, we prove that a certain weakening of the occupation measure relaxation is equivalent to the convex envelope. This opens the way to application of the occupation measure relaxation in situations where the convex envelope relaxation is known to be equivalent to the original problem, which includes problems in magnetism and elasticity.","PeriodicalId":49529,"journal":{"name":"SIAM Journal on Optimization","volume":null,"pages":null},"PeriodicalIF":3.1,"publicationDate":"2024-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140887343","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Kaizhao Sun, Xu Andy Sun
"Dual Descent Augmented Lagrangian Method and Alternating Direction Method of Multipliers." DOI: 10.1137/21m1449099
SIAM Journal on Optimization, Volume 34, Issue 2, Page 1679-1707, June 2024. Abstract. Classical primal-dual algorithms attempt to solve [math] by alternately minimizing over the primal variable [math] through primal descent and maximizing over the dual variable [math] through dual ascent. However, when [math] is highly nonconvex with complex constraints in [math], the minimization over [math] may not achieve global optimality and, hence, the dual ascent step loses its justification. This observation motivates us to propose a new class of primal-dual algorithms for nonconvex constrained optimization whose key feature is to replace dual ascent with a conceptually new dual descent, in a sense elevating the dual variable to the same status as the primal variable. Surprisingly, this new dual scheme achieves some of the best-known iteration complexities for solving nonconvex optimization problems. When the dual descent step is scaled by a fractional constant, we call the scheme scaled dual descent (SDD); otherwise, unscaled dual descent (UDD). For nonconvex multiblock optimization with nonlinear equality constraints, we propose the SDD-alternating direction method of multipliers (SDD-ADMM) and show that it finds an [math]-stationary solution in [math] iterations. The complexity is further improved to [math] and [math] under proper conditions. We also propose the UDD-augmented Lagrangian method (UDD-ALM), which combines UDD with the ALM, for weakly convex minimization over affine constraints. We show that UDD-ALM finds an [math]-stationary solution in [math] iterations. These complexity bounds for both algorithms either match or improve the best-known results in the ADMM and ALM literature. Moreover, SDD-ADMM addresses a long-standing limitation of existing ADMM frameworks.
{"title":"Dual Descent Augmented Lagrangian Method and Alternating Direction Method of Multipliers","authors":"Kaizhao Sun, Xu Andy Sun","doi":"10.1137/21m1449099","DOIUrl":"https://doi.org/10.1137/21m1449099","url":null,"abstract":"SIAM Journal on Optimization, Volume 34, Issue 2, Page 1679-1707, June 2024. <br/> Abstract. Classical primal-dual algorithms attempt to solve [math] by alternately minimizing over the primal variable [math] through primal descent and maximizing the dual variable [math] through dual ascent. However, when [math] is highly nonconvex with complex constraints in [math], the minimization over [math] may not achieve global optimality and, hence, the dual ascent step loses its valid intuition. This observation motivates us to propose a new class of primal-dual algorithms for nonconvex constrained optimization with the key feature to reverse dual ascent to a conceptually new dual descent, in a sense, elevating the dual variable to the same status as the primal variable. Surprisingly, this new dual scheme achieves some best iteration complexities for solving nonconvex optimization problems. In particular, when the dual descent step is scaled by a fractional constant, we name it scaled dual descent (SDD), otherwise, unscaled dual descent (UDD). For nonconvex multiblock optimization with nonlinear equality constraints, we propose SDD-alternating direction method of multipliers (SDD-ADMM) and show that it finds an [math]-stationary solution in [math] iterations. The complexity is further improved to [math] and [math] under proper conditions. We also propose UDD-augmented Lagrangian method (UDD-ALM), combining UDD with ALM, for weakly convex minimization over affine constraints. We show that UDD-ALM finds an [math]-stationary solution in [math] iterations. These complexity bounds for both algorithms either achieve or improve the best-known results in the ADMM and ALM literature. Moreover, SDD-ADMM addresses a long-standing limitation of existing ADMM frameworks.","PeriodicalId":49529,"journal":{"name":"SIAM Journal on Optimization","volume":null,"pages":null},"PeriodicalIF":3.1,"publicationDate":"2024-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140886918","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hoa T. Bui, Regina S. Burachik, Evgeni A. Nurminski, Matthew K. Tam
"Single-Projection Procedure for Infinite Dimensional Convex Optimization Problems." DOI: 10.1137/22m1530173
SIAM Journal on Optimization, Volume 34, Issue 2, Page 1646-1678, June 2024. Abstract. We consider a class of convex optimization problems in a Hilbert space that can be solved by performing a single projection, i.e., by projecting an infeasible point onto the feasible set. Our results improve those established for the linear programming setting in Nurminski (2015) by considering problems that (i) may have multiple solutions, (ii) do not satisfy strict complementarity conditions, and (iii) possess nonlinear convex constraints. As a by-product of our analysis, we provide a quantitative estimate on the required distance between the infeasible point and the feasible set in order for its projection to be a solution of the problem. Our analysis relies on a “sharpness” property of the constraint set, a new property we introduce here.
{"title":"Single-Projection Procedure for Infinite Dimensional Convex Optimization Problems","authors":"Hoa T. Bui, Regina S. Burachik, Evgeni A. Nurminski, Matthew K. Tam","doi":"10.1137/22m1530173","DOIUrl":"https://doi.org/10.1137/22m1530173","url":null,"abstract":"SIAM Journal on Optimization, Volume 34, Issue 2, Page 1646-1678, June 2024. <br/>Abstract. We consider a class of convex optimization problems in a Hilbert space that can be solved by performing a single projection, i.e., by projecting an infeasible point onto the feasible set. Our results improve those established for the linear programming setting in Nurminski (2015) by considering problems that (i) may have multiple solutions, (ii) do not satisfy strict complementarity conditions, and (iii) possess nonlinear convex constraints. As a by-product of our analysis, we provide a quantitative estimate on the required distance between the infeasible point and the feasible set in order for its projection to be a solution of the problem. Our analysis relies on a “sharpness” property of the constraint set, a new property we introduce here.","PeriodicalId":49529,"journal":{"name":"SIAM Journal on Optimization","volume":null,"pages":null},"PeriodicalIF":3.1,"publicationDate":"2024-05-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140830229","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Avinash Bhardwaj, Vishnu Narayanan, Abhishek Pathapati
"Exact Augmented Lagrangian Duality for Mixed Integer Convex Optimization." DOI: 10.1137/22m1526204
SIAM Journal on Optimization, Volume 34, Issue 2, Page 1622-1645, June 2024. Abstract. The augmented Lagrangian dual augments the classical Lagrangian dual with a nonnegative nonlinear penalty function of the violation of the relaxed/dualized constraints in order to reduce the duality gap. We investigate the cases in which mixed integer convex optimization problems have an exact penalty representation using sharp augmenting functions (norms as augmenting penalty functions). We present a generalizable constructive proof technique for proving the existence of exact penalty representations for mixed integer convex programs under specific conditions using the associated value functions. This generalizes the recent results for mixed integer linear programming [M. J. Feizollahi, S. Ahmed, and A. Sun, Math. Program., 161 (2017), pp. 365–387] and mixed integer quadratic programming [X. Gu, S. Ahmed, and S. S. Dey, SIAM J. Optim., 30 (2020), pp. 781–797] while also providing an alternative proof of these results, along with a quantification of the finite penalty parameter in these cases.
{"title":"Exact Augmented Lagrangian Duality for Mixed Integer Convex Optimization","authors":"Avinash Bhardwaj, Vishnu Narayanan, Abhishek Pathapati","doi":"10.1137/22m1526204","DOIUrl":"https://doi.org/10.1137/22m1526204","url":null,"abstract":"SIAM Journal on Optimization, Volume 34, Issue 2, Page 1622-1645, June 2024. <br/>Abstract. Augmented Lagrangian dual augments the classical Lagrangian dual with a nonnegative nonlinear penalty function of the violation of the relaxed/dualized constraints in order to reduce the duality gap. We investigate the cases in which mixed integer convex optimization problems have an exact penalty representation using sharp augmenting functions (norms as augmenting penalty functions). We present a generalizable constructive proof technique for proving existence of exact penalty representations for mixed integer convex programs under specific conditions using the associated value functions. This generalizes the recent results for mixed integer linear programming [M. J. Feizollahi, S. Ahmed, and A. Sun, Math. Program., 161 (2017), pp. 365–387] and mixed integer quadratic progamming [X. Gu, S. Ahmed, and S. S. Dey, SIAM J. Optim., 30 (2020), pp. 781–797] while also providing an alternative proof for the aforementioned along with quantification of the finite penalty parameter in these cases.","PeriodicalId":49529,"journal":{"name":"SIAM Journal on Optimization","volume":null,"pages":null},"PeriodicalIF":3.1,"publicationDate":"2024-05-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140830776","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Martin Morin, Sebastian Banert, Pontus Giselsson
"Frugal Splitting Operators: Representation, Minimal Lifting, and Convergence." DOI: 10.1137/22m1531105
SIAM Journal on Optimization, Volume 34, Issue 2, Page 1595-1621, June 2024. Abstract. We investigate frugal splitting operators for finite sum monotone inclusion problems. These operators utilize exactly one direct or resolvent evaluation of each operator of the sum, and the splitting operator’s output is dictated by linear combinations of these evaluations’ inputs and outputs. To facilitate analysis, we introduce a novel representation of frugal splitting operators via a generalized primal-dual resolvent. The representation is characterized by an index and four matrices, and we provide conditions on these that ensure equivalence between the classes of frugal splitting operators and generalized primal-dual resolvents. Our representation paves the way for new results regarding lifting numbers and the development of a unified convergence analysis for frugal splitting operator methods, contingent on the directly evaluated operators being cocoercive. The minimal lifting number is [math] where [math] is the number of monotone operators and [math] is the number of direct evaluations in the splitting. Notably, this lifting number is achievable only if the first and last operator evaluations are resolvent evaluations. These results generalize the minimal lifting results by Ryu and by Malitsky and Tam that consider frugal resolvent splittings. Building on our representation, we delineate a constructive method to design frugal splitting operators, exemplified in the design of a novel, convergent, and parallelizable frugal splitting operator with minimal lifting.
{"title":"Frugal Splitting Operators: Representation, Minimal Lifting, and Convergence","authors":"Martin Morin, Sebastian Banert, Pontus Giselsson","doi":"10.1137/22m1531105","DOIUrl":"https://doi.org/10.1137/22m1531105","url":null,"abstract":"SIAM Journal on Optimization, Volume 34, Issue 2, Page 1595-1621, June 2024. <br/>Abstract. We investigate frugal splitting operators for finite sum monotone inclusion problems. These operators utilize exactly one direct or resolvent evaluation of each operator of the sum, and the splitting operator’s output is dictated by linear combinations of these evaluations’ inputs and outputs. To facilitate analysis, we introduce a novel representation of frugal splitting operators via a generalized primal-dual resolvent. The representation is characterized by an index and four matrices, and we provide conditions on these that ensure equivalence between the classes of frugal splitting operators and generalized primal-dual resolvents. Our representation paves the way for new results regarding lifting numbers and the development of a unified convergence analysis for frugal splitting operator methods, contingent on the directly evaluated operators being cocoercive. The minimal lifting number is [math] where [math] is the number of monotone operators and [math] is the number of direct evaluations in the splitting. Notably, this lifting number is achievable only if the first and last operator evaluations are resolvent evaluations. These results generalize the minimal lifting results by Ryu and by Malitsky and Tam that consider frugal resolvent splittings. Building on our representation, we delineate a constructive method to design frugal splitting operators, exemplified in the design of a novel, convergent, and parallelizable frugal splitting operator with minimal lifting.","PeriodicalId":49529,"journal":{"name":"SIAM Journal on Optimization","volume":null,"pages":null},"PeriodicalIF":3.1,"publicationDate":"2024-04-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140830756","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Kristian Bredies, Enis Chenchene, Emanuele Naldi
"Graph and Distributed Extensions of the Douglas–Rachford Method." DOI: 10.1137/22m1535097
SIAM Journal on Optimization, Volume 34, Issue 2, Page 1569-1594, June 2024. Abstract. In this paper, we propose several graph-based extensions of the Douglas–Rachford splitting (DRS) method to solve monotone inclusion problems involving the sum of [math] maximal monotone operators. Our construction is based on the choice of two nested graphs, to which we associate a generalization of the DRS algorithm that presents a prescribed structure. The resulting schemes can be understood as unconditionally stable frugal resolvent splitting methods with minimal lifting in the sense of Ryu [Math. Program., 182 (2020), pp. 233–273] as well as instances of the (degenerate) preconditioned proximal point method, which provides robust convergence guarantees. We further describe how the graph-based extensions of the DRS method can be leveraged to design new fully distributed protocols. Applications to a congested optimal transport problem and to distributed support vector machines show interesting connections with the underlying graph topology and highly competitive performances with state-of-the-art distributed optimization approaches.
{"title":"Graph and Distributed Extensions of the Douglas–Rachford Method","authors":"Kristian Bredies, Enis Chenchene, Emanuele Naldi","doi":"10.1137/22m1535097","DOIUrl":"https://doi.org/10.1137/22m1535097","url":null,"abstract":"SIAM Journal on Optimization, Volume 34, Issue 2, Page 1569-1594, June 2024. <br/> Abstract. In this paper, we propose several graph-based extensions of the Douglas–Rachford splitting (DRS) method to solve monotone inclusion problems involving the sum of [math] maximal monotone operators. Our construction is based on the choice of two nested graphs, to which we associate a generalization of the DRS algorithm that presents a prescribed structure. The resulting schemes can be understood as unconditionally stable frugal resolvent splitting methods with minimal lifting in the sense of Ryu [Math. Program., 182 (2020), pp. 233–273] as well as instances of the (degenerate) preconditioned proximal point method, which provides robust convergence guarantees. We further describe how the graph-based extensions of the DRS method can be leveraged to design new fully distributed protocols. Applications to a congested optimal transport problem and to distributed support vector machines show interesting connections with the underlying graph topology and highly competitive performances with state-of-the-art distributed optimization approaches.","PeriodicalId":49529,"journal":{"name":"SIAM Journal on Optimization","volume":null,"pages":null},"PeriodicalIF":3.1,"publicationDate":"2024-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140800575","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Family of (boldsymbol{s})-Rectangular Robust MDPs: Relative Conservativeness, Asymptotic Analyses, and Finite-Sample Properties","authors":"Sivaramakrishnan Ramani, A. Ghate","doi":"10.1137/23m1559920","DOIUrl":"https://doi.org/10.1137/23m1559920","url":null,"abstract":"","PeriodicalId":49529,"journal":{"name":"SIAM Journal on Optimization","volume":null,"pages":null},"PeriodicalIF":3.1,"publicationDate":"2024-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140662949","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Roberto Andreani, María L. Schuverdt, Leonardo D. Secchin
"On Enhanced KKT Optimality Conditions for Smooth Nonlinear Optimization." DOI: 10.1137/22m1539678
SIAM Journal on Optimization, Volume 34, Issue 2, Page 1515-1539, June 2024. Abstract. The Fritz John (FJ) and Karush–Kuhn–Tucker (KKT) conditions are fundamental tools for characterizing minimizers and form the basis of almost all methods for constrained optimization. Since the seminal works of Fritz John, Karush, Kuhn, and Tucker, FJ/KKT conditions have been enhanced by adding extra necessary conditions. Such an extension was initially proposed by Hestenes in the 1970s and later extensively studied by Bertsekas and collaborators. In this work, we revisit enhanced KKT stationarity for standard (smooth) nonlinear programming. We argue that every KKT point satisfies the usual enhanced versions found in the literature. Therefore, enhanced KKT stationarity only concerns the Lagrange multipliers. We then analyze some properties of the corresponding multipliers under the quasi-normality constraint qualification (QNCQ), showing in particular that the set of so-called quasinormal multipliers is compact under QNCQ. Also, we report some consequences of introducing an extra abstract constraint to the problem. Given that enhanced FJ/KKT concepts are obtained by aggregating sequential conditions to FJ/KKT, we discuss the relevance of our findings with respect to the well-known sequential optimality conditions, which have been crucial in generalizing the global convergence of a well-established safeguarded augmented Lagrangian method. Finally, we apply our theory to mathematical programs with complementarity constraints and multiobjective problems, improving and elucidating previous results in the literature.
{"title":"On Enhanced KKT Optimality Conditions for Smooth Nonlinear Optimization","authors":"Roberto Andreani, María L. Schuverdt, Leonardo D. Secchin","doi":"10.1137/22m1539678","DOIUrl":"https://doi.org/10.1137/22m1539678","url":null,"abstract":"SIAM Journal on Optimization, Volume 34, Issue 2, Page 1515-1539, June 2024. <br/> Abstract. The Fritz John (FJ) and Karush–Kuhn–Tucker (KKT) conditions are fundamental tools for characterizing minimizers and form the basis of almost all methods for constrained optimization. Since the seminal works of Fritz John, Karush, Kuhn, and Tucker, FJ/KKT conditions have been enhanced by adding extra necessary conditions. Such an extension was initially proposed by Hestenes in the 1970s and later extensively studied by Bertsekas and collaborators. In this work, we revisit enhanced KKT stationarity for standard (smooth) nonlinear programming. We argue that every KKT point satisfies the usual enhanced versions found in the literature. Therefore, enhanced KKT stationarity only concerns the Lagrange multipliers. We then analyze some properties of the corresponding multipliers under the quasi-normality constraint qualification (QNCQ), showing in particular that the set of so-called quasinormal multipliers is compact under QNCQ. Also, we report some consequences of introducing an extra abstract constraint to the problem. Given that enhanced FJ/KKT concepts are obtained by aggregating sequential conditions to FJ/KKT, we discuss the relevance of our findings with respect to the well-known sequential optimality conditions, which have been crucial in generalizing the global convergence of a well-established safeguarded augmented Lagrangian method. Finally, we apply our theory to mathematical programs with complementarity constraints and multiobjective problems, improving and elucidating previous results in the literature.","PeriodicalId":49529,"journal":{"name":"SIAM Journal on Optimization","volume":null,"pages":null},"PeriodicalIF":3.1,"publicationDate":"2024-04-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140614679","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Jie Wang
"Weighted Geometric Mean, Minimum Mediated Set, and Optimal Simple Second-Order Cone Representation." DOI: 10.1137/22m1531257
SIAM Journal on Optimization, Volume 34, Issue 2, Page 1490-1514, June 2024. Abstract. We study optimal simple second-order cone representations (a particular subclass of second-order cone representations) for weighted geometric means, which turn out to be closely related to minimum mediated sets. Several lower bounds and upper bounds on the size of optimal simple second-order cone representations are proved. In the case of bivariate weighted geometric means (equivalently, one-dimensional mediated sets), we are able to prove the exact size of an optimal simple second-order cone representation and give an algorithm to compute one. In the general case, fast heuristic algorithms and traversal algorithms are proposed to compute an approximately optimal simple second-order cone representation. Finally, applications to polynomial optimization, matrix optimization, and quantum information are provided.
{"title":"Weighted Geometric Mean, Minimum Mediated Set, and Optimal Simple Second-Order Cone Representation","authors":"Jie Wang","doi":"10.1137/22m1531257","DOIUrl":"https://doi.org/10.1137/22m1531257","url":null,"abstract":"SIAM Journal on Optimization, Volume 34, Issue 2, Page 1490-1514, June 2024. <br/> Abstract. We study optimal simple second-order cone representations (a particular subclass of second-order cone representations) for weighted geometric means, which turns out to be closely related to minimum mediated sets. Several lower bounds and upper bounds on the size of optimal simple second-order cone representations are proved. In the case of bivariate weighted geometric means (equivalently, one-dimensional mediated sets), we are able to prove the exact size of an optimal simple second-order cone representation and give an algorithm to compute one. In the genenal case, fast heuristic algorithms and traversal algorithms are proposed to compute an approximately optimal simple second-order cone representation. Finally, applications to polynomial optimization, matrix optimization, and quantum information are provided.","PeriodicalId":49529,"journal":{"name":"SIAM Journal on Optimization","volume":null,"pages":null},"PeriodicalIF":3.1,"publicationDate":"2024-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140614539","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}