{"title":"求解非光滑凸损失函数问题的批量随机子梯度法","authors":"KasimuJuma Ahmed","doi":"10.5121/csit.2023.131806","DOIUrl":null,"url":null,"abstract":"Mean Absolute Error (MAE) and Mean Square Error (MSE) are machine learning loss functions that not only estimates the discrepancy between prediction and true label but also guide the optimal parameter of the model.Gradient is used in estimating MSE model and Sub-gradient in estimating MAE model. Batch and stochastic are two of the many variations of sub-gradient method but the former considers the entire dataset per iteration while the latter considers one data point per iteration. Batch-stochastic Sub-gradient method that learn based on the inputted data and gives stable estimated loss value than that of stochastic and memory efficient than that of batch has been developed by considering defined collection of data-point per iteration. The stability and memory efficiency of the method was tested using structured query language (SQL). The new method shows greater stability, accuracy, convergence, memory efficiencyand computational efficiency than any other existing method of finding optimal feasible parameter(s) of a continuous data.","PeriodicalId":91205,"journal":{"name":"Artificial intelligence and applications (Commerce, Calif.)","volume":"101 7-8","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Batch-Stochastic Sub-Gradient Method for Solving Non-Smooth Convex Loss Function Problems\",\"authors\":\"KasimuJuma Ahmed\",\"doi\":\"10.5121/csit.2023.131806\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Mean Absolute Error (MAE) and Mean Square Error (MSE) are machine learning loss functions that not only estimates the discrepancy between prediction and true label but also guide the optimal parameter of the model.Gradient is used in estimating MSE model and Sub-gradient in estimating MAE model. Batch and stochastic are two of the many variations of sub-gradient method but the former considers the entire dataset per iteration while the latter considers one data point per iteration. Batch-stochastic Sub-gradient method that learn based on the inputted data and gives stable estimated loss value than that of stochastic and memory efficient than that of batch has been developed by considering defined collection of data-point per iteration. The stability and memory efficiency of the method was tested using structured query language (SQL). 
The new method shows greater stability, accuracy, convergence, memory efficiencyand computational efficiency than any other existing method of finding optimal feasible parameter(s) of a continuous data.\",\"PeriodicalId\":91205,\"journal\":{\"name\":\"Artificial intelligence and applications (Commerce, Calif.)\",\"volume\":\"101 7-8\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-10-21\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Artificial intelligence and applications (Commerce, Calif.)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.5121/csit.2023.131806\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Artificial intelligence and applications (Commerce, Calif.)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.5121/csit.2023.131806","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Mean Absolute Error (MAE) and Mean Square Error (MSE) are machine learning loss functions that not only measure the discrepancy between predictions and true labels but also guide the search for optimal model parameters. The gradient is used when fitting the MSE model, while the sub-gradient is required for the MAE model, since MAE is non-smooth. Batch and stochastic methods are two of the many variants of the sub-gradient method: the former processes the entire dataset per iteration, while the latter processes a single data point per iteration. A batch-stochastic sub-gradient method has been developed that, by working on a defined collection of data points per iteration, learns from the input data, yields more stable loss estimates than the stochastic variant, and is more memory-efficient than the batch variant. The stability and memory efficiency of the method were tested using Structured Query Language (SQL). The new method shows greater stability, accuracy, convergence, memory efficiency, and computational efficiency than existing methods for finding optimal feasible parameters of continuous data.
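The abstract describes the method only at a high level. As a concrete illustration, below is a minimal Python/NumPy sketch of a mini-batch ("batch-stochastic") sub-gradient method on the non-smooth MAE loss, assuming a linear model; the function names, batch size, learning rate, and epoch count are illustrative assumptions, not details taken from the paper. Because |r| is non-differentiable at r = 0, sign(r) serves as a valid sub-gradient, and averaging it over a mini-batch is what trades the stability of full-batch updates against the memory cost of holding the whole dataset.

```python
import numpy as np

def mae_subgradient_step(w, b, X_batch, y_batch, lr):
    """One sub-gradient step on the MAE loss of a linear model.

    |r| is non-differentiable at r = 0, so sign(r) (with sign(0) = 0)
    is used as a valid sub-gradient of the absolute residual.
    """
    residuals = X_batch @ w + b - y_batch      # shape: (batch_size,)
    signs = np.sign(residuals)                 # sub-gradient of |r| w.r.t. r
    grad_w = X_batch.T @ signs / len(y_batch)  # average over the mini-batch
    grad_b = signs.mean()
    return w - lr * grad_w, b - lr * grad_b

def batch_stochastic_subgradient(X, y, batch_size=32, lr=0.01,
                                 epochs=100, seed=0):
    """Mini-batch sub-gradient descent: a compromise between the
    full-batch (all points per iteration) and stochastic (one point
    per iteration) variants described in the abstract."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        order = rng.permutation(n)             # reshuffle each epoch
        for start in range(0, n, batch_size):
            idx = order[start:start + batch_size]
            w, b = mae_subgradient_step(w, b, X[idx], y[idx], lr)
    return w, b
```

Setting batch_size=1 recovers the stochastic variant and batch_size=n the full-batch variant, which is why an intermediate batch size inherits the stability of the former's averaging while keeping per-iteration memory bounded.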