{"title":"基于lum损失的分类学习算法误差分析","authors":"Xuqing He, Hongwei Sun","doi":"10.3934/mfc.2022028","DOIUrl":null,"url":null,"abstract":"<p style='text-indent:20px;'>In this paper, we study the learning performance of regularized large-margin unified machines (LUMs) for classification problem. The hypothesis space is taken to be a reproducing kernel Hilbert space <inline-formula><tex-math id=\"M1\">\\begin{document}$ {\\mathcal H}_K $\\end{document}</tex-math></inline-formula>, and the penalty term is denoted by the norm of the function in <inline-formula><tex-math id=\"M2\">\\begin{document}$ {\\mathcal H}_K $\\end{document}</tex-math></inline-formula>. Since the LUM loss functions are differentiable and convex, so the data piling phenomena can be avoided when dealing with the high-dimension low-sample size data. The error analysis of this classification learning machine mainly lies upon the comparison theorem [<xref ref-type=\"bibr\" rid=\"b3\">3</xref>] which ensures that the excess classification error can be bounded by the excess generalization error. Under a mild source condition which shows that the minimizer <inline-formula><tex-math id=\"M3\">\\begin{document}$ f_V $\\end{document}</tex-math></inline-formula> of the generalization error can be approximated by the hypothesis space <inline-formula><tex-math id=\"M4\">\\begin{document}$ {\\mathcal H}_K $\\end{document}</tex-math></inline-formula>, and by a leave one out variant technique proposed in [<xref ref-type=\"bibr\" rid=\"b13\">13</xref>], satisfying error bound and learning rate about the mean of excess classification error are deduced.</p>","PeriodicalId":93334,"journal":{"name":"Mathematical foundations of computing","volume":"78 1","pages":"616-624"},"PeriodicalIF":1.3000,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Error analysis of classification learning algorithms based on LUMs loss\",\"authors\":\"Xuqing He, Hongwei Sun\",\"doi\":\"10.3934/mfc.2022028\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p style='text-indent:20px;'>In this paper, we study the learning performance of regularized large-margin unified machines (LUMs) for classification problem. The hypothesis space is taken to be a reproducing kernel Hilbert space <inline-formula><tex-math id=\\\"M1\\\">\\\\begin{document}$ {\\\\mathcal H}_K $\\\\end{document}</tex-math></inline-formula>, and the penalty term is denoted by the norm of the function in <inline-formula><tex-math id=\\\"M2\\\">\\\\begin{document}$ {\\\\mathcal H}_K $\\\\end{document}</tex-math></inline-formula>. Since the LUM loss functions are differentiable and convex, so the data piling phenomena can be avoided when dealing with the high-dimension low-sample size data. The error analysis of this classification learning machine mainly lies upon the comparison theorem [<xref ref-type=\\\"bibr\\\" rid=\\\"b3\\\">3</xref>] which ensures that the excess classification error can be bounded by the excess generalization error. 
Under a mild source condition which shows that the minimizer <inline-formula><tex-math id=\\\"M3\\\">\\\\begin{document}$ f_V $\\\\end{document}</tex-math></inline-formula> of the generalization error can be approximated by the hypothesis space <inline-formula><tex-math id=\\\"M4\\\">\\\\begin{document}$ {\\\\mathcal H}_K $\\\\end{document}</tex-math></inline-formula>, and by a leave one out variant technique proposed in [<xref ref-type=\\\"bibr\\\" rid=\\\"b13\\\">13</xref>], satisfying error bound and learning rate about the mean of excess classification error are deduced.</p>\",\"PeriodicalId\":93334,\"journal\":{\"name\":\"Mathematical foundations of computing\",\"volume\":\"78 1\",\"pages\":\"616-624\"},\"PeriodicalIF\":1.3000,\"publicationDate\":\"2023-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Mathematical foundations of computing\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.3934/mfc.2022028\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"COMPUTER SCIENCE, THEORY & METHODS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Mathematical foundations of computing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3934/mfc.2022028","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, THEORY & METHODS","Score":null,"Total":0}
Citations: 1
Abstract
In this paper, we study the learning performance of regularized large-margin unified machines (LUMs) for the classification problem. The hypothesis space is taken to be a reproducing kernel Hilbert space $\mathcal{H}_K$, and the penalty term is the $\mathcal{H}_K$-norm of the function. Since the LUM loss functions are differentiable and convex, the data piling phenomenon can be avoided when dealing with high-dimension, low-sample-size data. The error analysis of this classification learning machine relies mainly on the comparison theorem [3], which ensures that the excess classification error can be bounded by the excess generalization error. Under a mild source condition, which states that the minimizer $f_V$ of the generalization error can be approximated in the hypothesis space $\mathcal{H}_K$, and by a leave-one-out variant technique proposed in [13], an error bound and a learning rate for the mean of the excess classification error are derived.
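To make the objects in the abstract concrete, here is a minimal NumPy sketch, not the authors' implementation: it assumes the standard LUM loss family of Liu, Zhang and Wu with parameters $a > 0$ and $c \ge 0$, a Gaussian kernel as an illustrative choice of $K$, and plain gradient descent on the representer-theorem coefficients of the regularization scheme $f_{\mathbf{z}} = \arg\min_{f \in \mathcal{H}_K} \frac{1}{m}\sum_{i=1}^m V(y_i f(x_i)) + \lambda \|f\|_K^2$. The names lum_loss, lum_grad, fit_regularized_lum and all parameter values are hypothetical.

```python
import numpy as np

def lum_loss(u, a=1.0, c=1.0):
    """LUM loss (assumed parametrization of Liu, Zhang and Wu):
       V(u) = 1 - u                                  if u <  c/(1+c)
       V(u) = (1/(1+c)) * (a/((1+c)u - c + a))**a    otherwise.
       Convex and differentiable for a > 0, c >= 0."""
    u = np.asarray(u, dtype=float)
    linear = u < c / (1.0 + c)
    z = np.where(linear, 1.0, (1.0 + c) * u - c + a)  # dummy z on linear piece
    return np.where(linear, 1.0 - u, (a / z) ** a / (1.0 + c))

def lum_grad(u, a=1.0, c=1.0):
    """V'(u); equals -1 on the linear piece, and the two pieces agree
       at u = c/(1+c), so V is differentiable everywhere."""
    u = np.asarray(u, dtype=float)
    linear = u < c / (1.0 + c)
    z = np.where(linear, 1.0, (1.0 + c) * u - c + a)
    return np.where(linear, -1.0, -((a / z) ** (a + 1)))

def gaussian_kernel(X, Z, sigma=1.0):
    """Mercer kernel inducing the RKHS H_K (Gaussian chosen for illustration)."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def fit_regularized_lum(X, y, lam=0.01, a=1.0, c=1.0, sigma=1.0,
                        lr=0.2, steps=2000):
    """Gradient descent on (1/m) sum_i V(y_i f(x_i)) + lam * ||f||_K^2,
       with f(x) = sum_j alpha_j K(x, x_j) by the representer theorem."""
    m = len(y)
    K = gaussian_kernel(X, X, sigma)
    alpha = np.zeros(m)
    for _ in range(steps):
        margins = y * (K @ alpha)                      # u_i = y_i f(x_i)
        grad = K @ (y * lum_grad(margins, a, c)) / m + 2.0 * lam * (K @ alpha)
        alpha -= lr * grad
    return alpha, K

# Toy usage: two Gaussian blobs, classified by sign(f(x)).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 1, (20, 2)), rng.normal(1, 1, (20, 2))])
y = np.r_[-np.ones(20), np.ones(20)]
alpha, K = fit_regularized_lum(X, y)
print("training accuracy:", np.mean(np.sign(K @ alpha) == y))
```

Because $V$ is convex with a continuous first derivative (the two pieces match at $u = c/(1+c)$ with slope $-1$), the fitted classifier varies smoothly with the margins, which is the property behind the abstract's claim that data piling is avoided for high-dimension, low-sample-size data.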