A case study on partitioning data for classification
B. K. Sarkar
Int. J. Inf. Decis. Sci., published 2016-04-06. DOI: 10.1504/IJIDS.2016.075788
Designing an accurate model for a classification problem is a real concern in machine learning. Several factors play important roles here, such as the inclusion of high-quality samples in the training set, the number of samples, and the proportion of each class type in the set (which must be sufficient for designing the model). This article presents an investigation into what proportion of the samples should be devoted to the training set in order to develop a better classification model. Experimental results on several datasets, using the C4.5 classifier, show that any equidistributed data partition between (20%, 80%) and (30%, 70%) may be considered the best sample partition for building a classification model, irrespective of domain, size, and class imbalance.
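The partitioning experiment the abstract describes can be sketched as follows. This is an illustrative reconstruction, not the paper's code: it uses scikit-learn's `DecisionTreeClassifier` with the entropy criterion as a stand-in for C4.5 (the two are related but not identical), the Iris dataset as a placeholder domain, and a stratified split so each class keeps its proportion in both subsets.

```python
# Hedged sketch: evaluate how the train/test split ratio affects a
# decision-tree classifier, in the spirit of the abstract's experiment.
# DecisionTreeClassifier(criterion="entropy") approximates C4.5;
# load_iris is a placeholder dataset, not one used in the paper.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

for train_frac in (0.2, 0.3, 0.5, 0.8):
    # stratify=y preserves each class's proportion in train and test sets.
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, train_size=train_frac, stratify=y, random_state=0
    )
    clf = DecisionTreeClassifier(criterion="entropy", random_state=0)
    clf.fit(X_tr, y_tr)
    print(f"train={train_frac:.0%}  test accuracy={clf.score(X_te, y_te):.3f}")
```

Repeating such a loop across datasets of varying size and class balance is how one would probe the abstract's claim that training fractions in the 20-30% range suffice.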