{"title":"On the Convergence of Multi-Objective Descent Algorithms","authors":"Martin Brown, Nicky Hutauruk","doi":"10.1109/MCDM.2007.369447","DOIUrl":null,"url":null,"abstract":"This paper investigates the convergence paths, rate of convergence and the convergence half-space associated with a class of descent multi-objective optimization algorithms. The first order descent algorithms are defined by maximizing the local objectives' reductions which can be interpreted in either the primal space (parameters) or the dual space (objectives). It is shown that the convergence paths are often aligned with a subset of the objectives gradients and that, in the limit, the convergence path is perpendicular to the local Pareto set. Similarities and differences are established for a range of p-norm descent algorithms. Bounds on the rate of convergence are established by considering the stability of first order learning rules. In addition, it is shown that the multi-objective descent algorithms implicitly generate a half-space which defines a convergence condition for family of optimization algorithms. Any procedure that generates updates that lie in this half-space will converge to the local Pareto set. This can be used to motivate the development of second order algorithms","PeriodicalId":306422,"journal":{"name":"2007 IEEE Symposium on Computational Intelligence in Multi-Criteria Decision-Making","volume":"39 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2007-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2007 IEEE Symposium on Computational Intelligence in Multi-Criteria Decision-Making","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/MCDM.2007.369447","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2
Abstract
This paper investigates the convergence paths, the rate of convergence, and the convergence half-space associated with a class of descent multi-objective optimization algorithms. The first-order descent algorithms are defined by maximizing the local reduction of the objectives, which can be interpreted in either the primal space (parameters) or the dual space (objectives). It is shown that the convergence paths are often aligned with a subset of the objectives' gradients and that, in the limit, the convergence path is perpendicular to the local Pareto set. Similarities and differences are established for a range of p-norm descent algorithms. Bounds on the rate of convergence are established by considering the stability of first-order learning rules. In addition, it is shown that the multi-objective descent algorithms implicitly generate a half-space that defines a convergence condition for a family of optimization algorithms: any procedure whose updates lie in this half-space will converge to the local Pareto set. This observation can be used to motivate the development of second-order algorithms.
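To make the descent construction concrete, one natural reading of the 2-norm case (this formalization is our assumption, not a quote from the paper) is the multi-objective steepest-descent direction familiar from Fliege and Svaiter: the negated minimum-norm element of the convex hull of the objective gradients, with the associated half-space

\[
\mathcal{H}(\theta)=\{\,h:\langle h,\,d^{*}(\theta)\rangle>0\,\},\qquad
d^{*}(\theta)=-\operatorname*{arg\,min}_{v\,\in\,\operatorname{conv}\{\nabla f_{1}(\theta),\dots,\nabla f_{m}(\theta)\}}\lVert v\rVert_{2}.
\]

Below is a minimal sketch of this descent rule for two objectives, where the minimum-norm point has a closed form. The objectives, start point, and step size are hypothetical illustrations chosen so the Pareto set is easy to identify; the paper's exact algorithm may differ.

```python
import numpy as np

def common_descent_direction(g1, g2):
    """Negated min-norm element of conv{g1, g2}: the 2-norm steepest
    common-descent direction (closed form for two objectives)."""
    u = g1 - g2
    denom = u @ u
    lam = 0.0 if denom == 0.0 else float(np.clip(-(g2 @ u) / denom, 0.0, 1.0))
    return -(lam * g1 + (1.0 - lam) * g2)

# Hypothetical objectives: f1(x) = ||x||^2 and f2(x) = ||x - 1||^2, whose
# local Pareto set is the segment between (0, 0) and (1, 1).
grads = [lambda x: 2.0 * x, lambda x: 2.0 * (x - 1.0)]

x = np.array([2.0, -1.0])   # illustrative start point
eta = 0.1                   # fixed step size (assumption, not from the paper)
for _ in range(200):
    d = common_descent_direction(grads[0](x), grads[1](x))
    if np.linalg.norm(d) < 1e-8:   # d = 0 iff x is locally Pareto-critical
        break
    x = x + eta * d
print(x)  # ~[0.5, 0.5], a point on the Pareto set
```

In this example the iterates approach (0.5, 0.5) along the direction (-1, 1), perpendicular to the Pareto segment spanned by (1, 1), which is consistent with the limiting behaviour the abstract describes.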