{"title":"具有回溯步长的广义条件梯度方法的快速收敛速率","authors":"K. Kunisch, Daniel Walter","doi":"10.3934/naco.2022026","DOIUrl":null,"url":null,"abstract":"A generalized conditional gradient method for minimizing the sum of two convex functions, one of them differentiable, is presented. This iterative method relies on two main ingredients: First, the minimization of a partially linearized objective functional to compute a descent direction and, second, a stepsize choice based on an Armijo-like condition to ensure sufficient descent in every iteration. We provide several convergence results. Under mild assumptions, the method generates sequences of iterates which converge, on subsequences, towards minimizers. Moreover, a sublinear rate of convergence for the objective functional values is derived. Second, we show that the method enjoys improved rates of convergence if the partially linearized problem fulfills certain growth estimates. Most notably these results do not require strong convexity of the objective functional. Numerical tests on a variety of challenging PDE-constrained optimization problems confirm the practical efficiency of the proposed algorithm.","PeriodicalId":44957,"journal":{"name":"Numerical Algebra Control and Optimization","volume":"15 1","pages":""},"PeriodicalIF":1.1000,"publicationDate":"2021-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":"{\"title\":\"On fast convergence rates for generalized conditional gradient methods with backtracking stepsize\",\"authors\":\"K. Kunisch, Daniel Walter\",\"doi\":\"10.3934/naco.2022026\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"A generalized conditional gradient method for minimizing the sum of two convex functions, one of them differentiable, is presented. This iterative method relies on two main ingredients: First, the minimization of a partially linearized objective functional to compute a descent direction and, second, a stepsize choice based on an Armijo-like condition to ensure sufficient descent in every iteration. We provide several convergence results. Under mild assumptions, the method generates sequences of iterates which converge, on subsequences, towards minimizers. Moreover, a sublinear rate of convergence for the objective functional values is derived. Second, we show that the method enjoys improved rates of convergence if the partially linearized problem fulfills certain growth estimates. Most notably these results do not require strong convexity of the objective functional. 
Numerical tests on a variety of challenging PDE-constrained optimization problems confirm the practical efficiency of the proposed algorithm.\",\"PeriodicalId\":44957,\"journal\":{\"name\":\"Numerical Algebra Control and Optimization\",\"volume\":\"15 1\",\"pages\":\"\"},\"PeriodicalIF\":1.1000,\"publicationDate\":\"2021-09-30\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"5\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Numerical Algebra Control and Optimization\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.3934/naco.2022026\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"MATHEMATICS, APPLIED\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Numerical Algebra Control and Optimization","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3934/naco.2022026","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"MATHEMATICS, APPLIED","Score":null,"Total":0}
On fast convergence rates for generalized conditional gradient methods with backtracking stepsize
A generalized conditional gradient method for minimizing the sum of two convex functions, one of them differentiable, is presented. This iterative method relies on two main ingredients: first, the minimization of a partially linearized objective functional to compute a descent direction and, second, a stepsize choice based on an Armijo-like condition to ensure sufficient descent in every iteration. We provide several convergence results. Under mild assumptions, the method generates sequences of iterates which converge, on subsequences, towards minimizers. Moreover, a sublinear rate of convergence for the objective functional values is derived. Furthermore, we show that the method enjoys improved rates of convergence if the partially linearized problem fulfills certain growth estimates. Most notably, these results do not require strong convexity of the objective functional. Numerical tests on a variety of challenging PDE-constrained optimization problems confirm the practical efficiency of the proposed algorithm.
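To make the two ingredients of the abstract concrete, the following Python sketch implements a generalized conditional gradient iteration for the special case where the nonsmooth term g is the indicator function of an ℓ1-ball, an assumption made here only so that the partially linearized subproblem has a closed-form solution. The function names, the tolerance, the sufficient-descent parameter gamma, and the stepsize-halving rule are illustrative choices, not the authors' exact algorithm.

```python
import numpy as np

def gcg_backtracking(f, grad_f, radius, u0, gamma=1e-4, max_iter=200, tol=1e-8):
    """Generalized conditional gradient with an Armijo-like backtracking stepsize.

    Minimizes f(u) + g(u), where g is taken to be the indicator of the
    l1-ball of the given radius, so the partially linearized subproblem
        v_k in argmin_{||v||_1 <= radius} <grad f(u_k), v>
    is solved in closed form by a signed vertex of the ball.
    """
    u = u0.copy()
    for _ in range(max_iter):
        g = grad_f(u)
        # Closed-form solution of the partially linearized problem:
        # put all mass on the coordinate with the largest |gradient entry|.
        i = np.argmax(np.abs(g))
        v = np.zeros_like(u)
        v[i] = -radius * np.sign(g[i])
        d = v - u                 # descent direction
        phi = g @ (u - v)         # duality gap; nonnegative for feasible u
        if phi <= tol:            # a small gap certifies near-optimality
            break
        s, fu = 1.0, f(u)
        # Armijo-like condition: demand a decrease proportional to s * phi.
        while s > 1e-12 and f(u + s * d) > fu - gamma * s * phi:
            s *= 0.5              # backtrack by halving
        u = u + s * d             # convex combination stays in the ball
    return u

# Illustrative use: least squares restricted to the l1-ball.
rng = np.random.default_rng(0)
A, b = rng.normal(size=(40, 100)), rng.normal(size=40)
f = lambda u: 0.5 * np.sum((A @ u - b) ** 2)
grad_f = lambda u: A.T @ (A @ u - b)
u_opt = gcg_backtracking(f, grad_f, radius=1.0, u0=np.zeros(100))
```

With g an indicator function this reduces to the classical Frank-Wolfe method; the paper's setting is more general (nonsmooth convex g, function-space problems), but the gap-based sufficient-descent test above mirrors the Armijo-like stepsize condition described in the abstract.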
About the journal:
Numerical Algebra, Control and Optimization (NACO) aims at publishing original papers on any non-trivial interplay between control and optimization, and on numerical techniques for their underlying linear and nonlinear algebraic systems. Topics of interest to NACO include the following: original research in the theory, algorithms and applications of optimization; numerical methods for linear and nonlinear algebraic systems arising in modelling, control and optimization; and original theoretical and applied research and development in the control of systems, including all facets of control theory and its applications. Among the application areas, particular interest is given to artificial intelligence and data science. The journal also welcomes expository submissions on subjects of current relevance to its readers. The publication of papers in NACO is free of charge.