{"title":"根据ROC参数评估分类器性能的非参数方法的比较","authors":"W. Yousef, R. F. Wagner, M. Loew","doi":"10.1109/AIPR.2004.18","DOIUrl":null,"url":null,"abstract":"The most common metric to assess a classifier's performance is the classification error rate, or the probability of misclassification (PMC). Receiver operating characteristic (ROC) analysis is a more general way to measure the performance. Some metrics that summarize the ROC curve are the two normal-deviate-axes parameters, i.e., a and b, and the area under the curve (AUC). The parameters \"a\" and \"b\" represent the intercept and slope, respectively, for the ROC curve if plotted on normal-deviate-axes scale. AUC represents the average of the classifier TPF over FPF resulting from considering different threshold values. In the present work, we used Monte-Carlo simulations to compare different bootstrap-based estimators, e.g., leave-one-out, .632, and .632+ bootstraps, to estimate the AUC. The results show the comparable performance of the different estimators in terms of RMS, while the .632+ is the least biased.","PeriodicalId":120814,"journal":{"name":"33rd Applied Imagery Pattern Recognition Workshop (AIPR'04)","volume":"449 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2004-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"41","resultStr":"{\"title\":\"Comparison of non-parametric methods for assessing classifier performance in terms of ROC parameters\",\"authors\":\"W. Yousef, R. F. Wagner, M. Loew\",\"doi\":\"10.1109/AIPR.2004.18\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The most common metric to assess a classifier's performance is the classification error rate, or the probability of misclassification (PMC). Receiver operating characteristic (ROC) analysis is a more general way to measure the performance. Some metrics that summarize the ROC curve are the two normal-deviate-axes parameters, i.e., a and b, and the area under the curve (AUC). The parameters \\\"a\\\" and \\\"b\\\" represent the intercept and slope, respectively, for the ROC curve if plotted on normal-deviate-axes scale. AUC represents the average of the classifier TPF over FPF resulting from considering different threshold values. In the present work, we used Monte-Carlo simulations to compare different bootstrap-based estimators, e.g., leave-one-out, .632, and .632+ bootstraps, to estimate the AUC. 
The results show the comparable performance of the different estimators in terms of RMS, while the .632+ is the least biased.\",\"PeriodicalId\":120814,\"journal\":{\"name\":\"33rd Applied Imagery Pattern Recognition Workshop (AIPR'04)\",\"volume\":\"449 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2004-10-13\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"41\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"33rd Applied Imagery Pattern Recognition Workshop (AIPR'04)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/AIPR.2004.18\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"33rd Applied Imagery Pattern Recognition Workshop (AIPR'04)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/AIPR.2004.18","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Comparison of non-parametric methods for assessing classifier performance in terms of ROC parameters
The most common metric for assessing a classifier's performance is the classification error rate, or probability of misclassification (PMC). Receiver operating characteristic (ROC) analysis is a more general way to measure performance. Metrics that summarize the ROC curve include the two normal-deviate-axes parameters, a and b, and the area under the curve (AUC). The parameters a and b are, respectively, the intercept and slope of the ROC curve when it is plotted on normal-deviate axes. AUC is the average of the classifier's true-positive fraction (TPF) over the range of false-positive fraction (FPF) obtained by sweeping the decision threshold. In the present work, we used Monte Carlo simulations to compare different bootstrap-based estimators of the AUC, e.g., the leave-one-out, .632, and .632+ bootstraps. The results show comparable performance of the different estimators in terms of root-mean-square (RMS) error, while the .632+ bootstrap is the least biased.
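
To make the estimators concrete, below is a minimal Python sketch (not the authors' code) of the quantities the abstract compares: the nonparametric (Mann-Whitney) AUC, the apparent (resubstitution) AUC, a leave-one-out bootstrap (LOOB) AUC, and the .632 and .632+ combinations. The toy two-class Gaussian data, the difference-of-means linear scorer, the per-replication form of the LOOB average, and the use of 0.5 as the no-information AUC in the .632+ weight are all illustrative assumptions; the function names are hypothetical.

```python
"""A minimal sketch of bootstrap-based AUC estimators (apparent, LOOB,
.632, .632+), under the assumptions stated in the text above."""
import numpy as np

rng = np.random.default_rng(0)

def auc_mann_whitney(neg_scores, pos_scores):
    """Nonparametric AUC: P(pos score > neg score) + 0.5 * P(tie),
    i.e., the Mann-Whitney / Wilcoxon two-sample statistic."""
    diff = pos_scores[:, None] - neg_scores[None, :]
    return np.mean((diff > 0) + 0.5 * (diff == 0))

def train_scorer(X, y):
    """'Train' a difference-of-means linear scorer (a simple stand-in
    for the classifiers studied in the paper)."""
    w = X[y == 1].mean(axis=0) - X[y == 0].mean(axis=0)
    return lambda X_new: X_new @ w

def apparent_auc(X, y):
    """Resubstitution AUC: train and test on the same cases."""
    s = train_scorer(X, y)(X)
    return auc_mann_whitney(s[y == 0], s[y == 1])

def loob_auc(X, y, B=200):
    """Leave-one-out bootstrap: each replication is scored only on the
    cases it did not sample, then the per-replication AUCs are averaged
    (one common formulation; the paper's definition may differ in detail)."""
    n = len(y)
    aucs = []
    for _ in range(B):
        idx = rng.integers(0, n, n)            # bootstrap training sample
        out = np.setdiff1d(np.arange(n), idx)  # left-out cases
        # skip replications lacking both classes in train or test
        if len(set(y[idx])) < 2 or len(set(y[out])) < 2:
            continue
        s = train_scorer(X[idx], y[idx])(X[out])
        aucs.append(auc_mann_whitney(s[y[out] == 0], s[y[out] == 1]))
    return np.mean(aucs)

def bootstrap_auc_estimates(X, y, B=200):
    a_app = apparent_auc(X, y)
    a_loob = loob_auc(X, y, B)
    a_632 = 0.368 * a_app + 0.632 * a_loob
    # .632+: weight shifts toward LOOB as the relative overfitting rate R
    # grows; 0.5 plays the role of Efron & Tibshirani's no-information
    # rate (assumed AUC analog).
    gamma = 0.5
    R = (a_app - a_loob) / (a_app - gamma) if a_app > gamma else 0.0
    R = min(max(R, 0.0), 1.0)
    w = 0.632 / (1.0 - 0.368 * R)
    a_632p = (1.0 - w) * a_app + w * a_loob
    return a_app, a_loob, a_632, a_632p

# Toy example: two shifted 2-D Gaussian classes, 20 cases each.
X0 = rng.normal(0.0, 1.0, (20, 2))
X1 = rng.normal(0.8, 1.0, (20, 2))
X = np.vstack([X0, X1])
y = np.r_[np.zeros(20, int), np.ones(20, int)]
print(bootstrap_auc_estimates(X, y))
```

The .632 estimator fixes the weight on the LOOB term at 0.632, while the .632+ version pushes that weight toward 1 as the relative overfitting rate R grows; that adaptive weighting is what makes it the least biased of the estimators compared.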