R. Snapp, S. S. Venkatesh. Proceedings of 1994 Workshop on Information Theory and Statistics, 1994-10-27. doi: 10.1109/WITS.1994.513925
The finite-sample risk of the k-nearest-neighbor classifier under the L_p metric
The finite-sample risk of the k-nearest-neighbor classifier that uses an L_2 distance function is examined. For a family of classification problems with smooth distributions in R^n, the risk can be represented as an asymptotic expansion in inverse powers of the n-th root of the reference-sample size. The leading coefficients of this expansion suggest that the Euclidean, or L_2, distance function minimizes the risk for sufficiently large reference samples.
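For context, the classifier under study is the standard k-nearest-neighbor rule, where neighbors are ranked by an L_p (Minkowski) distance. A minimal sketch of such a classifier follows; the function name and NumPy-based implementation are illustrative, not taken from the paper:

```python
import numpy as np

def knn_classify(x, X_ref, y_ref, k=3, p=2.0):
    """Classify point x by majority vote among its k nearest
    reference points under the L_p (Minkowski) distance.

    x     : query point, shape (d,)
    X_ref : reference sample, shape (m, d)
    y_ref : integer class labels, shape (m,)
    """
    # L_p distance from x to every reference point
    dists = np.sum(np.abs(X_ref - x) ** p, axis=1) ** (1.0 / p)
    # indices of the k closest reference points
    nearest = np.argsort(dists)[:k]
    # majority vote among their labels
    labels, counts = np.unique(y_ref[nearest], return_counts=True)
    return labels[np.argmax(counts)]

# Toy two-class reference sample in R^2
X_ref = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0],
                  [5.0, 5.0], [5.0, 6.0], [6.0, 5.0]])
y_ref = np.array([0, 0, 0, 1, 1, 1])

knn_classify(np.array([0.2, 0.2]), X_ref, y_ref, k=3, p=2.0)  # -> 0
```

Varying p selects the metric: p=2 gives the Euclidean distance the abstract singles out, while p=1 gives the city-block distance; the paper's expansion concerns how this choice affects the finite-sample risk.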