{"title":"Dual Descent Augmented Lagrangian Method and Alternating Direction Method of Multipliers","authors":"Kaizhao Sun, Xu Andy Sun","doi":"10.1137/21m1449099","DOIUrl":null,"url":null,"abstract":"SIAM Journal on Optimization, Volume 34, Issue 2, Page 1679-1707, June 2024. <br/> Abstract. Classical primal-dual algorithms attempt to solve [math] by alternately minimizing over the primal variable [math] through primal descent and maximizing the dual variable [math] through dual ascent. However, when [math] is highly nonconvex with complex constraints in [math], the minimization over [math] may not achieve global optimality and, hence, the dual ascent step loses its valid intuition. This observation motivates us to propose a new class of primal-dual algorithms for nonconvex constrained optimization with the key feature to reverse dual ascent to a conceptually new dual descent, in a sense, elevating the dual variable to the same status as the primal variable. Surprisingly, this new dual scheme achieves some best iteration complexities for solving nonconvex optimization problems. In particular, when the dual descent step is scaled by a fractional constant, we name it scaled dual descent (SDD), otherwise, unscaled dual descent (UDD). For nonconvex multiblock optimization with nonlinear equality constraints, we propose SDD-alternating direction method of multipliers (SDD-ADMM) and show that it finds an [math]-stationary solution in [math] iterations. The complexity is further improved to [math] and [math] under proper conditions. We also propose UDD-augmented Lagrangian method (UDD-ALM), combining UDD with ALM, for weakly convex minimization over affine constraints. We show that UDD-ALM finds an [math]-stationary solution in [math] iterations. These complexity bounds for both algorithms either achieve or improve the best-known results in the ADMM and ALM literature. Moreover, SDD-ADMM addresses a long-standing limitation of existing ADMM frameworks.","PeriodicalId":49529,"journal":{"name":"SIAM Journal on Optimization","volume":null,"pages":null},"PeriodicalIF":2.6000,"publicationDate":"2024-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"SIAM Journal on Optimization","FirstCategoryId":"100","ListUrlMain":"https://doi.org/10.1137/21m1449099","RegionNum":1,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"MATHEMATICS, APPLIED","Score":null,"Total":0}
Abstract
Classical primal-dual algorithms attempt to solve $\max_y \min_x \mathcal{L}(x,y)$ by alternately minimizing over the primal variable $x$ through primal descent and maximizing over the dual variable $y$ through dual ascent. However, when $\mathcal{L}$ is highly nonconvex with complex constraints in $x$, the minimization over $x$ may not achieve global optimality and, hence, the dual ascent step loses its valid intuition. This observation motivates us to propose a new class of primal-dual algorithms for nonconvex constrained optimization whose key feature is to reverse dual ascent to a conceptually new dual descent, in a sense elevating the dual variable to the same status as the primal variable. Surprisingly, this new dual scheme achieves some of the best iteration complexities for solving nonconvex optimization problems. In particular, when the dual descent step is scaled by a fractional constant, we name it scaled dual descent (SDD); otherwise, unscaled dual descent (UDD). For nonconvex multiblock optimization with nonlinear equality constraints, we propose the SDD-alternating direction method of multipliers (SDD-ADMM) and show that it finds an $\epsilon$-stationary solution in $\mathcal{O}(\epsilon^{-4})$ iterations. The complexity is further improved to $\mathcal{O}(\epsilon^{-3})$ and $\mathcal{O}(\epsilon^{-2})$ under proper conditions. We also propose the UDD-augmented Lagrangian method (UDD-ALM), combining UDD with ALM, for weakly convex minimization over affine constraints. We show that UDD-ALM finds an $\epsilon$-stationary solution in $\mathcal{O}(\epsilon^{-2})$ iterations. These complexity bounds for both algorithms either achieve or improve the best-known results in the ADMM and ALM literature. Moreover, SDD-ADMM addresses a long-standing limitation of existing ADMM frameworks.
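To make the reversal concrete, recall that for $\min_x f(x)$ subject to $h(x)=0$, the classical augmented Lagrangian method performs dual ascent on $\mathcal{L}_\rho(x,y) = f(x) + y^\top h(x) + \tfrac{\rho}{2}\|h(x)\|^2$. The following schematic contrast is illustrative only; the step parameter $\tau$ and the exact scaling are assumptions of this sketch, not the authors' precise update rules:

$$x^{k+1} \approx \operatorname*{arg\,min}_x \ \mathcal{L}_\rho(x, y^k)$$
$$\text{dual ascent (classical ALM):} \quad y^{k+1} = y^k + \rho\, h(x^{k+1})$$
$$\text{dual descent (schematic):} \quad y^{k+1} = y^k - \tau \rho\, h(x^{k+1}), \qquad \tau > 0$$

Under this schematic reading, SDD corresponds to damping the reversed multiplier step by a fractional constant, e.g. $\tau \in (0,1)$, while UDD takes the reversed step unscaled, e.g. $\tau = 1$, matching the abstract's distinction between the two variants.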
Journal description:
The SIAM Journal on Optimization contains research articles on the theory and practice of optimization. The areas addressed include linear and quadratic programming, convex programming, nonlinear programming, complementarity problems, stochastic optimization, combinatorial optimization, integer programming, and convex, nonsmooth and variational analysis. Contributions may emphasize optimization theory, algorithms, software, computational practice, applications, or the links between these subjects.