{"title":"使用泰勒近似梯度改进经验风险最小化的弗兰克-沃尔夫方法","authors":"Zikai Xiong, Robert M. Freund","doi":"10.1137/22m1519286","DOIUrl":null,"url":null,"abstract":"SIAM Journal on Optimization, Volume 34, Issue 3, Page 2503-2534, September 2024. <br/> Abstract. The Frank–Wolfe method has become increasingly useful in statistical and machine learning applications due to the structure-inducing properties of the iterates and especially in settings where linear minimization over the feasible set is more computationally efficient than projection. In the setting of empirical risk minimization—one of the fundamental optimization problems in statistical and machine learning—the computational effectiveness of Frank–Wolfe methods typically grows linearly in the number of data observations [math]. This is in stark contrast to the case for typical stochastic projection methods. In order to reduce this dependence on [math], we look to second-order smoothness of typical smooth loss functions (least squares loss and logistic loss, for example), and we propose amending the Frank–Wolfe method with Taylor series–approximated gradients, including variants for both deterministic and stochastic settings. Compared with current state-of-the-art methods in the regime where the optimality tolerance [math] is sufficiently small, our methods are able to simultaneously reduce the dependence on large [math] while obtaining optimal convergence rates of Frank–Wolfe methods in both convex and nonconvex settings. We also propose a novel adaptive step-size approach for which we have computational guarantees. Finally, we present computational experiments which show that our methods exhibit very significant speedups over existing methods on real-world datasets for both convex and nonconvex binary classification problems.","PeriodicalId":49529,"journal":{"name":"SIAM Journal on Optimization","volume":null,"pages":null},"PeriodicalIF":2.6000,"publicationDate":"2024-07-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Using Taylor-Approximated Gradients to Improve the Frank–Wolfe Method for Empirical Risk Minimization\",\"authors\":\"Zikai Xiong, Robert M. Freund\",\"doi\":\"10.1137/22m1519286\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"SIAM Journal on Optimization, Volume 34, Issue 3, Page 2503-2534, September 2024. <br/> Abstract. The Frank–Wolfe method has become increasingly useful in statistical and machine learning applications due to the structure-inducing properties of the iterates and especially in settings where linear minimization over the feasible set is more computationally efficient than projection. In the setting of empirical risk minimization—one of the fundamental optimization problems in statistical and machine learning—the computational effectiveness of Frank–Wolfe methods typically grows linearly in the number of data observations [math]. This is in stark contrast to the case for typical stochastic projection methods. In order to reduce this dependence on [math], we look to second-order smoothness of typical smooth loss functions (least squares loss and logistic loss, for example), and we propose amending the Frank–Wolfe method with Taylor series–approximated gradients, including variants for both deterministic and stochastic settings. 
Compared with current state-of-the-art methods in the regime where the optimality tolerance [math] is sufficiently small, our methods are able to simultaneously reduce the dependence on large [math] while obtaining optimal convergence rates of Frank–Wolfe methods in both convex and nonconvex settings. We also propose a novel adaptive step-size approach for which we have computational guarantees. Finally, we present computational experiments which show that our methods exhibit very significant speedups over existing methods on real-world datasets for both convex and nonconvex binary classification problems.\",\"PeriodicalId\":49529,\"journal\":{\"name\":\"SIAM Journal on Optimization\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":2.6000,\"publicationDate\":\"2024-07-11\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"SIAM Journal on Optimization\",\"FirstCategoryId\":\"100\",\"ListUrlMain\":\"https://doi.org/10.1137/22m1519286\",\"RegionNum\":1,\"RegionCategory\":\"数学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"MATHEMATICS, APPLIED\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"SIAM Journal on Optimization","FirstCategoryId":"100","ListUrlMain":"https://doi.org/10.1137/22m1519286","RegionNum":1,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"MATHEMATICS, APPLIED","Score":null,"Total":0}
Using Taylor-Approximated Gradients to Improve the Frank–Wolfe Method for Empirical Risk Minimization
Zikai Xiong and Robert M. Freund
SIAM Journal on Optimization, Volume 34, Issue 3, Pages 2503-2534, September 2024. DOI: 10.1137/22m1519286

Abstract. The Frank–Wolfe method has become increasingly useful in statistical and machine learning applications due to the structure-inducing properties of its iterates, and especially in settings where linear minimization over the feasible set is more computationally efficient than projection. In the setting of empirical risk minimization (one of the fundamental optimization problems in statistical and machine learning), the computational cost of Frank–Wolfe methods typically grows linearly in the number of data observations n. This is in stark contrast to the case for typical stochastic projection methods. To reduce this dependence on n, we look to the second-order smoothness of typical smooth loss functions (the least squares loss and the logistic loss, for example), and we propose amending the Frank–Wolfe method with Taylor series–approximated gradients, including variants for both deterministic and stochastic settings. Compared with current state-of-the-art methods in the regime where the optimality tolerance ε is sufficiently small, our methods are able to simultaneously reduce the dependence on large n while obtaining the optimal convergence rates of Frank–Wolfe methods in both convex and nonconvex settings. We also propose a novel adaptive step-size approach for which we have computational guarantees. Finally, we present computational experiments showing that our methods exhibit very significant speedups over existing methods on real-world datasets for both convex and nonconvex binary classification problems.
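To make the mechanism concrete, the following is a minimal Python sketch of the idea as described in the abstract: a Frank–Wolfe loop for logistic-loss binary classification over an ℓ1-ball, in which the exact ERM gradient (cost O(np) per evaluation) is replaced, between periodic refreshes, by its first-order Taylor expansion around an anchor point (cost O(p²) per iteration, independent of n). This is an illustrative reading of the abstract only, not the paper's actual algorithms or its adaptive step-size rule; the feasible set, the refresh schedule refresh_every, the radius, and all function names are assumptions introduced here.

```python
import numpy as np

def lmo_l1_ball(grad, radius):
    """Linear minimization oracle over the l1-ball of the given radius:
    argmin_{||v||_1 <= radius} <grad, v> is a signed vertex along the
    coordinate where |grad| is largest."""
    i = np.argmax(np.abs(grad))
    v = np.zeros_like(grad)
    v[i] = -radius * np.sign(grad[i])
    return v

def logistic_grad_and_hessian(A, y, x):
    """Exact gradient and Hessian of the logistic ERM objective
    f(x) = (1/n) * sum_i log(1 + exp(-y_i * a_i^T x)).
    Both touch all n observations: O(n p) and O(n p^2) respectively."""
    n = A.shape[0]
    margins = y * (A @ x)
    sig = 1.0 / (1.0 + np.exp(margins))   # sigma(-y_i * a_i^T x)
    grad = -(A.T @ (sig * y)) / n
    w = sig * (1.0 - sig)                 # logistic curvature weights
    hess = (A.T * w) @ A / n
    return grad, hess

def taylor_fw(A, y, radius=10.0, iters=200, refresh_every=20):
    """Frank-Wolfe with Taylor-approximated gradients (illustrative sketch).
    Between anchor refreshes, the gradient is approximated by the
    first-order Taylor expansion around the anchor x0:
        grad(x) ~= grad(x0) + H(x0) @ (x - x0),
    which costs O(p^2) per iteration instead of O(n p)."""
    n, p = A.shape
    x = np.zeros(p)
    for k in range(iters):
        if k % refresh_every == 0:
            x0 = x.copy()
            g0, H0 = logistic_grad_and_hessian(A, y, x0)  # O(n p^2) refresh
        grad_approx = g0 + H0 @ (x - x0)                  # O(p^2), n-free
        v = lmo_l1_ball(grad_approx, radius)
        gamma = 2.0 / (k + 2.0)                           # classic FW step size
        x = (1.0 - gamma) * x + gamma * v
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((5000, 50))
    x_true = np.zeros(50)
    x_true[:5] = 1.0
    y = np.sign(A @ x_true + 0.1 * rng.standard_normal(5000))
    x_hat = taylor_fw(A, y)
    print("recovered support:", np.flatnonzero(np.abs(x_hat) > 1e-3))
```

In this sketch the anchor refresh is the only step whose cost scales with n; every iteration in between works with the precomputed p-by-p Hessian, which is where the reduced dependence on n would come from under these assumptions.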
About this journal:
The SIAM Journal on Optimization contains research articles on the theory and practice of optimization. The areas addressed include linear and quadratic programming, convex programming, nonlinear programming, complementarity problems, stochastic optimization, combinatorial optimization, integer programming, and convex, nonsmooth and variational analysis. Contributions may emphasize optimization theory, algorithms, software, computational practice, applications, or the links between these subjects.