On Acceleration of Gradient-Based Empirical Risk Minimization using Local Polynomial Regression

Ekaterina Trimbach, Edward Duc Hien Nguyen, César A Uribe

Control Conference (ECC) ... European. European Control Conference, vol. 2022, pp. 429-434, July 2022 (Epub 2022-08-05)
DOI: 10.23919/ecc55457.2022.9838261
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9581727/pdf/nihms-1842409.pdf
Citations: 0
Abstract
We study the acceleration of the Local Polynomial Interpolation-based Gradient Descent method (LPI-GD), recently proposed for the approximate solution of empirical risk minimization (ERM) problems. We focus on loss functions that are strongly convex and smooth with condition number σ. We additionally assume the loss function is η-Hölder continuous with respect to the data. The oracle complexity of LPI-GD is Õ(σ m^d log(1/ε)) for a desired accuracy ε, where d is the dimension of the parameter space and m is the cardinality of an approximation grid. The factor m^d can be shown to scale as O((1/ε)^(d/(2η))). LPI-GD has been shown to have better oracle complexity than gradient descent (GD) and stochastic gradient descent (SGD) for certain parameter regimes. We propose two accelerated methods for the ERM problem based on LPI-GD and show an oracle complexity of Õ(√σ m^d log(1/ε)). Moreover, we provide the first empirical study of local polynomial interpolation-based gradient methods and corroborate that LPI-GD outperforms GD and SGD in some scenarios, and that the proposed methods achieve acceleration.
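As a rough numerical illustration of the improvement from σ to √σ in the two bounds above, the sketch below evaluates both complexity expressions with constants and the factors hidden by the Õ notation dropped. The function names and the specific parameter values (σ, ε, d, η) are illustrative assumptions, not taken from the paper:

```python
import math

def md_factor(eps, d, eta):
    """The grid factor m^d, which scales as O((1/eps)^(d/(2*eta)))."""
    return (1.0 / eps) ** (d / (2.0 * eta))

def lpi_gd_calls(sigma, eps, d, eta):
    """Oracle-call count of LPI-GD, up to constants: sigma * m^d * log(1/eps)."""
    return sigma * md_factor(eps, d, eta) * math.log(1.0 / eps)

def accelerated_calls(sigma, eps, d, eta):
    """Accelerated bound, up to constants: sqrt(sigma) * m^d * log(1/eps)."""
    return math.sqrt(sigma) * md_factor(eps, d, eta) * math.log(1.0 / eps)

# Illustrative regime: ill-conditioned problem, modest accuracy,
# low-dimensional parameter space, Lipschitz-in-data losses (eta = 1).
sigma, eps, d, eta = 1e4, 1e-3, 2, 1.0
print(lpi_gd_calls(sigma, eps, d, eta))       # ~ sigma * (1/eps)^(d/2) * log(1/eps)
print(accelerated_calls(sigma, eps, d, eta))  # smaller by a factor of sqrt(sigma)
```

For any σ > 1 the accelerated bound is smaller by exactly a factor of √σ, which is the familiar gain of Nesterov-type acceleration on strongly convex, smooth problems.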