{"title":"Kernel-free Reduced Quadratic Surface Support Vector Machine with 0-1 Loss Function and L\\(_p\\)-norm Regularization","authors":"Mingyang Wu, Zhixia Yang","doi":"10.1007/s40745-024-00573-w","DOIUrl":null,"url":null,"abstract":"<div><p>This paper presents a novel nonlinear binary classification method, namely the kernel-free reduced quadratic surface support vector machine with 0-1 loss function and L<span>\\(_{p}\\)</span>-norm regularization (L<span>\\(_p\\)</span>-RQSSVM<span>\\(_{0/1}\\)</span>). It uses a kernel-free trick aimed at finding a reduced quadratic surface to separate samples, without considering the cross terms in the quadratic form. This saves computational costs and provides better interpretability than methods using kernel functions. In addition, adding the 0-1 loss function and L<span>\\(_p\\)</span>-norm regularization to construct our L<span>\\(_p\\)</span>-RQSSVM<span>\\(_{0/1}\\)</span> enables sample sparsity and feature sparsity. The support vector (SV) of L<span>\\(_p\\)</span>-RQSSVM<span>\\(_{0/1}\\)</span> is defined, and it is derived that all SVs fall on the support hypersurfaces. Moreover, the optimality condition is explored theoretically, and a new iterative algorithm based on the alternating direction method of multipliers (ADMM) framework is used to solve our L<span>\\(_p\\)</span>-RQSSVM<span>\\(_{0/1}\\)</span> on the selected working set. The computational complexity and convergence of the algorithm are discussed. Furthermore, numerical experiments demonstrate that our L<span>\\(_p\\)</span>-RQSSVM<span>\\(_{0/1}\\)</span> achieves better classification accuracy, fewer SVs, and higher computational efficiency than other methods on most datasets. It also has feature sparsity under certain conditions.</p></div>","PeriodicalId":36280,"journal":{"name":"Annals of Data Science","volume":"12 1","pages":"381 - 412"},"PeriodicalIF":0.0000,"publicationDate":"2024-08-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Annals of Data Science","FirstCategoryId":"1085","ListUrlMain":"https://link.springer.com/article/10.1007/s40745-024-00573-w","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"Decision Sciences","Score":null,"Total":0}
Citations: 0
Abstract
This paper presents a novel nonlinear binary classification method, namely the kernel-free reduced quadratic surface support vector machine with 0-1 loss function and L\(_{p}\)-norm regularization (L\(_p\)-RQSSVM\(_{0/1}\)). It uses a kernel-free trick aimed at finding a reduced quadratic surface to separate samples, omitting the cross terms of the quadratic form. This reduces computational cost and provides better interpretability than methods based on kernel functions. In addition, combining the 0-1 loss function with L\(_p\)-norm regularization in the construction of L\(_p\)-RQSSVM\(_{0/1}\) yields both sample sparsity and feature sparsity. The support vectors (SVs) of L\(_p\)-RQSSVM\(_{0/1}\) are defined, and it is shown that all SVs lie on the support hypersurfaces. Moreover, the optimality conditions are explored theoretically, and a new iterative algorithm based on the alternating direction method of multipliers (ADMM) framework is used to solve L\(_p\)-RQSSVM\(_{0/1}\) on a selected working set. The computational complexity and convergence of the algorithm are discussed. Furthermore, numerical experiments demonstrate that L\(_p\)-RQSSVM\(_{0/1}\) achieves better classification accuracy, fewer SVs, and higher computational efficiency than other methods on most datasets. It also exhibits feature sparsity under certain conditions.
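To make the "reduced" quadratic surface concrete: dropping the cross terms means the decision function keeps only squared and linear terms, i.e. f(x) = ½·Σᵢ aᵢxᵢ² + b·x + c, so the model learns one coefficient per squared feature instead of a full symmetric matrix. The sketch below is a hedged illustration of that functional form only; the parameters `a`, `b`, `c` are illustrative placeholders, not coefficients produced by the paper's L\(_p\)-RQSSVM\(_{0/1}\) training procedure.

```python
import numpy as np

def reduced_quadratic_score(x, a, b, c):
    """Score under a cross-term-free (diagonal) quadratic surface:
    f(x) = 0.5 * sum_i a_i * x_i**2 + b . x + c.
    `a` plays the role of the diagonal of the quadratic coefficient
    matrix; no off-diagonal (cross) terms are modeled."""
    x = np.asarray(x, dtype=float)
    return 0.5 * np.dot(np.asarray(a, dtype=float), x * x) \
        + np.dot(np.asarray(b, dtype=float), x) + float(c)

def predict(x, a, b, c):
    """Binary label from the sign of the surface score (+1 / -1)."""
    return 1 if reduced_quadratic_score(x, a, b, c) >= 0 else -1
```

Note the parameter count: for d features this surface has 2d + 1 parameters, versus d(d+1)/2 + d + 1 for a full quadratic surface, which is the computational saving the abstract refers to.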
Journal Introduction:
Annals of Data Science (ADS) publishes cutting-edge research findings, experimental results, and case studies of data science. Although data science is regarded as an interdisciplinary field that uses mathematics, statistics, databases, data mining, high-performance computing, knowledge management, and virtualization to discover knowledge from Big Data, it should have its own scientific content, such as axioms, laws, and rules, which are fundamentally important for experts in different fields to explore their own interests in Big Data. ADS encourages contributors to address such challenging problems on this exchange platform. At present, how to discover knowledge from heterogeneous data in a Big Data environment remains an open problem. ADS is a series of volumes edited by either the editorial office or guest editors. Guest editors are responsible for calls for papers and the review process for high-quality contributions in their volumes.