{"title":"Complexity theoretic limitations on learning halfspaces","authors":"Amit Daniely","doi":"10.1145/2897518.2897520","DOIUrl":null,"url":null,"abstract":"We study the problem of agnostically learning halfspaces which is defined by a fixed but unknown distribution D on Q^n X {-1,1}. We define Err_H(D) as the least error of a halfspace classifier for D. A learner who can access D has to return a hypothesis whose error is small compared to Err_H(D). Using the recently developed method of Daniely, Linial and Shalev-Shwartz we prove hardness of learning results assuming that random K-XOR formulas are hard to (strongly) refute. We show that no efficient learning algorithm has non-trivial worst-case performance even under the guarantees that Err_H(D) <= eta for arbitrarily small constant eta>0, and that D is supported in the Boolean cube. Namely, even under these favorable conditions, and for every c>0, it is hard to return a hypothesis with error <= 1/2-n^{-c}. In particular, no efficient algorithm can achieve a constant approximation ratio. Under a stronger version of the assumption (where K can be poly-logarithmic in n), we can take eta = 2^{-log^{1-nu}(n)} for arbitrarily small nu>0. These results substantially improve on previously known results, that only show hardness of exact learning.","PeriodicalId":442965,"journal":{"name":"Proceedings of the forty-eighth annual ACM symposium on Theory of Computing","volume":"196 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2015-05-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"119","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the forty-eighth annual ACM symposium on Theory of Computing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/2897518.2897520","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 119
Abstract
We study the problem of agnostically learning halfspaces, which is defined by a fixed but unknown distribution D on Q^n × {-1,1}. We define Err_H(D) as the least error of a halfspace classifier for D. A learner with access to D must return a hypothesis whose error is small compared to Err_H(D). Using the recently developed method of Daniely, Linial, and Shalev-Shwartz, we prove hardness-of-learning results under the assumption that random K-XOR formulas are hard to (strongly) refute. We show that no efficient learning algorithm has non-trivial worst-case performance, even under the guarantees that Err_H(D) <= eta for an arbitrarily small constant eta > 0 and that D is supported on the Boolean cube. Namely, even under these favorable conditions, and for every c > 0, it is hard to return a hypothesis with error <= 1/2 - n^{-c}. In particular, no efficient algorithm can achieve a constant approximation ratio. Under a stronger version of the assumption (where K can be poly-logarithmic in n), we can take eta = 2^{-log^{1-nu}(n)} for an arbitrarily small nu > 0. These results substantially improve on previously known results, which only show hardness of exact learning.
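To make the learning guarantee concrete, the quantities in the abstract can be written out in LaTeX as follows. This is a hedged sketch in standard agnostic-learning notation: the hypothesis h, the class H of halfspaces, and the returned hypothesis \hat{h} are conventional symbols assumed here, not spelled out in the abstract itself.

% Error of a hypothesis h : Q^n -> {-1,1} with respect to the distribution D:
\[
  \mathrm{Err}(h; D) \;=\; \Pr_{(x,y) \sim D}\bigl[\, h(x) \neq y \,\bigr].
\]

% Best achievable error over the class H of halfspaces,
% i.e. hypotheses of the form h(x) = sign(<w, x> + b):
\[
  \mathrm{Err}_H(D) \;=\; \inf_{h \in H} \mathrm{Err}(h; D).
\]

% The hardness statement: even under the promise Err_H(D) <= eta for an
% arbitrarily small constant eta > 0, no efficient learner can guarantee
\[
  \mathrm{Err}(\hat{h}; D) \;\le\; \tfrac{1}{2} - n^{-c}
\]
% for any c > 0, where \hat{h} is the (not necessarily halfspace) hypothesis
% the learner returns.

Note that 1/2 is the error of random guessing, so the result says an efficient learner cannot beat a trivial baseline by even an inverse-polynomial margin, despite a near-perfect halfspace existing.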