Performance Improvement of Logistic Regression for Binary Classification by Gauss-Newton Method
M. Jamhuri, I. Mukhlash, M. I. Irawan
Proceedings of the 2022 5th International Conference on Mathematics and Statistics
Published: 2022-06-17
DOI: 10.1145/3545839.3545842 (https://doi.org/10.1145/3545839.3545842)
Citations: 0
Abstract
This paper proposes a new approach to optimizing the cost function of binary logistic regression using the Gauss-Newton method. The method is applied in the backpropagation phase of the training process to update the weight coefficients. To evaluate the approach, we trained the logistic regression model on two data sets for binary classification problems. Our experiments demonstrated that, as expected, the proposed method outperforms gradient descent on both examples. Furthermore, our approach surpasses the classical method in both speed and accuracy.
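The abstract does not give implementation details, but the general idea of Gauss-Newton training for logistic regression can be sketched as follows: treat fitting as nonlinear least squares on the residual r = y - sigmoid(Xw), and update the weights with the Gauss-Newton approximation JᵀJ to the Hessian, where J is the Jacobian of the model output. This is a minimal illustrative sketch, not the authors' code; the function name, the ridge damping term, and the iteration count are assumptions for the example.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gauss_newton_logistic(X, y, n_iter=20, ridge=1e-6):
    """Illustrative Gauss-Newton fit of binary logistic regression.

    Poses training as nonlinear least squares on r = y - sigmoid(X @ w)
    and uses the Gauss-Newton Hessian approximation J.T @ J, where J is
    the Jacobian of sigmoid(X @ w) with respect to w.
    (Hypothetical sketch; not the paper's implementation.)
    """
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iter):
        p = sigmoid(X @ w)
        r = y - p                        # residuals of the sigmoid output
        s = p * (1.0 - p)                # sigmoid derivative at each sample
        J = X * s[:, None]               # Jacobian rows: s_i * x_i
        H = J.T @ J + ridge * np.eye(d)  # damped Gauss-Newton Hessian
        w = w + np.linalg.solve(H, J.T @ r)
    return w

# Small synthetic usage example with linearly separable labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X @ np.array([2.0, -3.0]) > 0.0).astype(float)
w = gauss_newton_logistic(X, y)
accuracy = np.mean((sigmoid(X @ w) > 0.5) == y)
```

Unlike plain gradient descent, each step solves a small d-by-d linear system, which typically reaches a good fit in far fewer iterations; the small ridge term keeps the system well conditioned when the sigmoid saturates and J.T @ J becomes near-singular.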