{"title":"基于随机梯度下降逼近的机器学习模型速度与精度权衡优化","authors":"Jasper Kyle Catapang","doi":"10.1109/ISCMI56532.2022.10068476","DOIUrl":null,"url":null,"abstract":"Stochastic gradient descent (SGD) is a widely used optimization algorithm for training machine learning models. However, due to its slow convergence and high variance, SGD can be difficult to use in practice. In this paper, the author proposes the use of the 4th order Runge-Kutta-Nyström (RKN) method to approximate the gradient function in SGD and replace the Newton boosting and SGD found in XGBoost and multilayer perceptrons (MLPs), respectively. The new variants are called ASTRA-Boost and ASTRA perceptron, where ASTRA stands for “Accuracy-Speed Trade-off Reduction via Approximation”. Specifically, the ASTRA models, through the 4th order Runge-Kutta-Nyström, converge faster than MLP with SGD and they also produce lower variance outputs, all without compromising model accuracy and overall performance.","PeriodicalId":340397,"journal":{"name":"2022 9th International Conference on Soft Computing & Machine Intelligence (ISCMI)","volume":"42 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Optimizing Speed and Accuracy Trade-off in Machine Learning Models via Stochastic Gradient Descent Approximation\",\"authors\":\"Jasper Kyle Catapang\",\"doi\":\"10.1109/ISCMI56532.2022.10068476\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Stochastic gradient descent (SGD) is a widely used optimization algorithm for training machine learning models. However, due to its slow convergence and high variance, SGD can be difficult to use in practice. In this paper, the author proposes the use of the 4th order Runge-Kutta-Nyström (RKN) method to approximate the gradient function in SGD and replace the Newton boosting and SGD found in XGBoost and multilayer perceptrons (MLPs), respectively. The new variants are called ASTRA-Boost and ASTRA perceptron, where ASTRA stands for “Accuracy-Speed Trade-off Reduction via Approximation”. 
Stochastic gradient descent (SGD) is a widely used optimization algorithm for training machine learning models. However, due to its slow convergence and high variance, SGD can be difficult to use in practice. In this paper, the author proposes the use of the fourth-order Runge-Kutta-Nyström (RKN) method to approximate the gradient function in SGD, replacing the Newton boosting in XGBoost and the SGD in multilayer perceptrons (MLPs), respectively. The new variants are called ASTRA-Boost and ASTRA perceptron, where ASTRA stands for "Accuracy-Speed Trade-off Reduction via Approximation". Specifically, the ASTRA models, through the fourth-order Runge-Kutta-Nyström method, converge faster than an MLP trained with SGD and also produce lower-variance outputs, all without compromising model accuracy and overall performance.
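
The abstract describes replacing the plain SGD update with a fourth-order Runge-Kutta-Nyström approximation of the gradient step. The paper's exact formulation is not given here, so the following is only a minimal sketch: it assumes the idea can be illustrated by integrating the gradient-flow ODE dθ/dt = -∇L(θ) with a classical fourth-order Runge-Kutta step in place of the forward-Euler step that ordinary SGD performs. The loss, data, and hyperparameters are illustrative assumptions, not the paper's setup.

```python
# Illustrative sketch only (not the paper's implementation): compare a plain SGD
# step, which is a forward-Euler step on the gradient flow d(theta)/dt = -grad L(theta),
# with a classical fourth-order Runge-Kutta step on the same flow.
import numpy as np

def grad(theta, X, y):
    # Gradient of mean-squared-error loss for a linear model y ~ X @ theta (assumed loss).
    return 2.0 * X.T @ (X @ theta - y) / len(y)

def sgd_step(theta, X, y, lr=0.01):
    # Standard SGD update (forward Euler), kept here as the baseline for comparison.
    return theta - lr * grad(theta, X, y)

def rk4_step(theta, X, y, lr=0.01):
    # Fourth-order Runge-Kutta update on the gradient flow; stands in for the
    # Runge-Kutta-Nystrom approximation described in the abstract.
    k1 = -grad(theta, X, y)
    k2 = -grad(theta + 0.5 * lr * k1, X, y)
    k3 = -grad(theta + 0.5 * lr * k2, X, y)
    k4 = -grad(theta + lr * k3, X, y)
    return theta + (lr / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(256, 5))
    true_theta = rng.normal(size=5)
    y = X @ true_theta + 0.1 * rng.normal(size=256)

    theta = np.zeros(5)
    for _ in range(200):
        batch = rng.choice(256, size=32, replace=False)  # stochastic mini-batch
        theta = rk4_step(theta, X[batch], y[batch])
    print("parameter error:", np.linalg.norm(theta - true_theta))
```

Each such update costs four gradient evaluations per mini-batch, so any convergence gain must outweigh the extra per-step cost relative to SGD's single, noisier evaluation; the abstract reports that the ASTRA variants converge faster and produce lower-variance outputs without losing accuracy.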