{"title":"采用预处理波利克步长的随机梯度下降法","authors":"F. Abdukhakimov, C. Xiang, D. Kamzolov, M. Takáč","doi":"10.1134/s0965542524700052","DOIUrl":null,"url":null,"abstract":"<h3 data-test=\"abstract-sub-heading\">Abstract</h3><p>Stochastic Gradient Descent (SGD) is one of the many iterative optimization methods that are widely used in solving machine learning problems. These methods display valuable properties and attract researchers and industrial machine learning engineers with their simplicity. However, one of the weaknesses of this type of methods is the necessity to tune learning rate (step-size) for every loss function and dataset combination to solve an optimization problem and get an efficient performance in a given time budget. Stochastic Gradient Descent with Polyak Step-size (SPS) is a method that offers an update rule that alleviates the need of fine-tuning the learning rate of an optimizer. In this paper, we propose an extension of SPS that employs preconditioning techniques, such as Hutchinson’s method, Adam, and AdaGrad, to improve its performance on badly scaled and/or ill-conditioned datasets.</p>","PeriodicalId":55230,"journal":{"name":"Computational Mathematics and Mathematical Physics","volume":null,"pages":null},"PeriodicalIF":0.7000,"publicationDate":"2024-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Stochastic Gradient Descent with Preconditioned Polyak Step-Size\",\"authors\":\"F. Abdukhakimov, C. Xiang, D. Kamzolov, M. Takáč\",\"doi\":\"10.1134/s0965542524700052\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<h3 data-test=\\\"abstract-sub-heading\\\">Abstract</h3><p>Stochastic Gradient Descent (SGD) is one of the many iterative optimization methods that are widely used in solving machine learning problems. These methods display valuable properties and attract researchers and industrial machine learning engineers with their simplicity. However, one of the weaknesses of this type of methods is the necessity to tune learning rate (step-size) for every loss function and dataset combination to solve an optimization problem and get an efficient performance in a given time budget. Stochastic Gradient Descent with Polyak Step-size (SPS) is a method that offers an update rule that alleviates the need of fine-tuning the learning rate of an optimizer. 
In this paper, we propose an extension of SPS that employs preconditioning techniques, such as Hutchinson’s method, Adam, and AdaGrad, to improve its performance on badly scaled and/or ill-conditioned datasets.</p>\",\"PeriodicalId\":55230,\"journal\":{\"name\":\"Computational Mathematics and Mathematical Physics\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.7000,\"publicationDate\":\"2024-06-07\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Computational Mathematics and Mathematical Physics\",\"FirstCategoryId\":\"100\",\"ListUrlMain\":\"https://doi.org/10.1134/s0965542524700052\",\"RegionNum\":4,\"RegionCategory\":\"数学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"MATHEMATICS, APPLIED\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computational Mathematics and Mathematical Physics","FirstCategoryId":"100","ListUrlMain":"https://doi.org/10.1134/s0965542524700052","RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"MATHEMATICS, APPLIED","Score":null,"Total":0}
引用次数: 0
Stochastic Gradient Descent with Preconditioned Polyak Step-Size
Abstract
Stochastic Gradient Descent (SGD) is one of the many iterative optimization methods widely used in solving machine learning problems. These methods have valuable properties and attract researchers and industrial machine learning engineers with their simplicity. However, one of the weaknesses of this type of method is the need to tune the learning rate (step-size) for every combination of loss function and dataset in order to solve the optimization problem efficiently within a given time budget. Stochastic Gradient Descent with Polyak Step-size (SPS) offers an update rule that alleviates the need to fine-tune the learning rate of the optimizer. In this paper, we propose an extension of SPS that employs preconditioning techniques, such as Hutchinson’s method, Adam, and AdaGrad, to improve its performance on badly scaled and/or ill-conditioned datasets.
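To make the idea concrete, below is a minimal sketch of how a Polyak step-size can be combined with a diagonal preconditioner. It assumes the standard SPS formula with the per-sample optimal loss treated as a known value (often zero in the interpolation setting) and uses an AdaGrad-style squared-gradient accumulator as one possible preconditioner; the function names, the eps safeguard, and the loop structure are illustrative and not taken from the paper.

```python
import numpy as np

def sps_preconditioned_step(x, grad, loss, loss_star, D_diag, eps=1e-8):
    """One SGD step with a Polyak step-size computed in the metric of a
    diagonal preconditioner D (D_diag holds the diagonal of D).

    The step-size uses the preconditioned gradient norm ||g||^2_{D^{-1}},
    and the update is x <- x - gamma * D^{-1} g, which drives the sampled
    loss toward its optimum loss_star on the current mini-batch.
    """
    D_inv_grad = grad / (D_diag + eps)                  # D^{-1} g
    grad_norm_sq = np.dot(grad, D_inv_grad)             # g^T D^{-1} g
    gamma = (loss - loss_star) / (grad_norm_sq + eps)   # Polyak step-size
    return x - gamma * D_inv_grad

def run_sps_adagrad(grad_fn, loss_fn, x0, n_steps, loss_star=0.0, eps=1e-8):
    """Illustrative driver: AdaGrad-style diagonal preconditioner built from
    a running sum of squared gradients (element-wise square root)."""
    x = np.asarray(x0, dtype=float).copy()
    v = np.zeros_like(x)  # running sum of squared gradients
    for _ in range(n_steps):
        g = grad_fn(x)
        v += g * g
        D_diag = np.sqrt(v) + eps
        x = sps_preconditioned_step(x, g, loss_fn(x), loss_star, D_diag, eps)
    return x
```

In this sketch the only design choice beyond plain SPS is measuring the gradient norm in the preconditioned metric, so the step-size and the search direction stay consistent with the same diagonal scaling.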
Journal description:
Computational Mathematics and Mathematical Physics is a monthly journal published in collaboration with the Russian Academy of Sciences. The journal includes reviews and original papers on computational mathematics, computational methods of mathematical physics, informatics, and other mathematical sciences. The journal welcomes reviews and original articles from all countries in the English or Russian language.