{"title":"二元模式分类系统的评价方法","authors":"Chih-Fong Tsai","doi":"10.1109/IEEM.2010.5674217","DOIUrl":null,"url":null,"abstract":"Evaluation of pattern classification systems is the critical and important step in order to understand the system's performance over a chosen testing dataset. In general, considering cross validation can produce the ‘optimal’ or ‘objective’ classification result. As some ground-truth dataset(s) are usually used for simulating the system's classification performance, this may be somehow difficult to judge the system, which can provide similar performances for future unknown events. That is, when the system facing the real world cases are unlikely to provide as similar classification performances as the simulation results. This paper presents an ARS evaluation framework for binary pattern classification systems to solve the limitation of using the ground-truth dataset during system simulation. It is based on accuracy, reliability, and stability testing strategies. The experimental results based on the bankruptcy prediction case show that the proposed evaluation framework can solve the limitation of using some chosen testing set and allow us to understand more about the system's classification performances.","PeriodicalId":285694,"journal":{"name":"2010 IEEE International Conference on Industrial Engineering and Engineering Management","volume":"10 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2010-12-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"An evaluation methodology for binary pattern classification systems\",\"authors\":\"Chih-Fong Tsai\",\"doi\":\"10.1109/IEEM.2010.5674217\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Evaluation of pattern classification systems is the critical and important step in order to understand the system's performance over a chosen testing dataset. In general, considering cross validation can produce the ‘optimal’ or ‘objective’ classification result. As some ground-truth dataset(s) are usually used for simulating the system's classification performance, this may be somehow difficult to judge the system, which can provide similar performances for future unknown events. That is, when the system facing the real world cases are unlikely to provide as similar classification performances as the simulation results. This paper presents an ARS evaluation framework for binary pattern classification systems to solve the limitation of using the ground-truth dataset during system simulation. It is based on accuracy, reliability, and stability testing strategies. 
The experimental results based on the bankruptcy prediction case show that the proposed evaluation framework can solve the limitation of using some chosen testing set and allow us to understand more about the system's classification performances.\",\"PeriodicalId\":285694,\"journal\":{\"name\":\"2010 IEEE International Conference on Industrial Engineering and Engineering Management\",\"volume\":\"10 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2010-12-23\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2010 IEEE International Conference on Industrial Engineering and Engineering Management\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/IEEM.2010.5674217\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2010 IEEE International Conference on Industrial Engineering and Engineering Management","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IEEM.2010.5674217","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
An evaluation methodology for binary pattern classification systems
Evaluating a pattern classification system is a critical step toward understanding its performance on a chosen testing dataset. In general, cross validation is considered to produce an 'optimal' or 'objective' classification result. However, because one or more ground-truth datasets are usually used to simulate the system's classification performance, it is difficult to judge whether the system will deliver similar performance on future, unknown events. In other words, when the system faces real-world cases, it is unlikely to achieve classification performance as close to the simulation results as expected. This paper presents an ARS evaluation framework for binary pattern classification systems that addresses this limitation of relying on ground-truth datasets during system simulation. The framework is based on accuracy, reliability, and stability testing strategies. Experimental results on a bankruptcy prediction case show that the proposed framework overcomes the limitation of relying on a single chosen testing set and provides a fuller picture of the system's classification performance.
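The abstract names the three ARS testing strategies (accuracy, reliability, stability) but does not spell out how they are computed. The sketch below is only an illustration of what such an evaluation could look like for a binary classifier; the choice of classifier, split sizes, repetition counts, and the way reliability and stability are summarized are all assumptions made here for demonstration, not the paper's actual procedure.

```python
# Illustrative ARS-style evaluation sketch (assumptions, not the paper's method):
# - accuracy: score on one chosen ground-truth test set
# - reliability: mean/std of 10-fold cross-validation scores
# - stability: spread of scores over repeated random train/test splits
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split, cross_val_score

# Synthetic binary classification data standing in for a bankruptcy-prediction set.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Accuracy: performance on a single chosen testing set.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)

# Reliability: does performance hold across different data partitions?
cv_scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=10)
reliability_mean, reliability_std = cv_scores.mean(), cv_scores.std()

# Stability: how much does performance vary when the train/test split
# (a proxy for "future unknown events") changes across repeated runs?
repeat_scores = []
for seed in range(30):
    X_a, X_b, y_a, y_b = train_test_split(X, y, test_size=0.3, random_state=seed)
    model = LogisticRegression(max_iter=1000).fit(X_a, y_a)
    repeat_scores.append(model.score(X_b, y_b))
stability_std = np.std(repeat_scores)

print(f"accuracy={accuracy:.3f}")
print(f"reliability={reliability_mean:.3f} +/- {reliability_std:.3f}")
print(f"stability (std over 30 re-splits)={stability_std:.3f}")
```

The point of reporting all three numbers rather than a single test-set score is the same as the paper's motivation: a score on one chosen testing set says little about how the system behaves on data it has not seen, so the spread across partitions and repeated runs is examined alongside plain accuracy.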