Impact of Membership and Non-membership Features on Classification Decision: An Empirical Study for Appraisal of Feature Selection Methods
B. Abbasi, Shahid Hussain, Shaista Bibi, M. A. Shah
2018 24th International Conference on Automation and Computing (ICAC), September 2018. DOI: 10.23919/IConAC.2018.8749009
In text categorization, the discriminative power of the classifier, the characteristics of the dataset, and the construction of a representative feature set all play an important role in classification decisions. For this reason, filter-based feature selection methods are generally preferred in text categorization over wrapper and embedded methods. To construct a representative feature set, a number of global and local filter-based feature selection methods are used, each with its own pros and cons. Whether membership and non-membership features are included in or excluded from the constructed feature set depends on the discriminative power of the feature selection method. Although a few studies have reported the impact of non-membership features on classification decisions, to the best of our knowledge there is no detailed study that calibrates the effectiveness of feature selection methods in terms of including non-membership features to improve classification decisions. Consequently, in this paper we conduct an empirical study to investigate the effectiveness of four well-known filter-based feature selection methods, namely IG, $\chi^2$, RF, and DF. We then perform a case study on the classification of the Gang-of-Four software design patterns. The results show that a balanced consideration of membership and non-membership features has a positive impact on classifier performance and that classification decisions can be improved. We also conclude that, among the existing methods, random forest is best at considering an equal number of membership and non-membership features, and classifiers show better performance with this method compared to the others.
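As a rough illustration of the kind of filter-based scoring the abstract refers to, the minimal sketch below (not the authors' implementation) computes three of the four scores with scikit-learn on a toy corpus: $\chi^2$, information gain approximated by mutual information, and document frequency (DF). The corpus, class labels, and the number of selected features k are illustrative assumptions, and RF is omitted.

```python
# Minimal sketch (assumed setup, not the paper's code): rank terms with three
# filter-based feature selection scores and keep the top-k for each.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2, mutual_info_classif

# Toy corpus and labels (hypothetical two-class design-pattern descriptions).
docs = [
    "observer pattern notifies dependent objects of state changes",
    "singleton pattern restricts a class to a single instance",
    "factory method defers object creation to subclasses",
    "observer subjects maintain a list of subscribers",
]
labels = np.array([0, 1, 1, 0])

# Bag-of-words term-document matrix.
vec = CountVectorizer()
X = vec.fit_transform(docs)
terms = vec.get_feature_names_out()

k = 5  # number of features to keep per method (illustrative)

# Chi-square: dependence between term occurrence and class label.
chi2_scores = SelectKBest(chi2, k="all").fit(X, labels).scores_
chi2_idx = np.argsort(chi2_scores)[::-1][:k]

# Information gain, approximated here by mutual information on discrete counts.
ig_scores = mutual_info_classif(X, labels, discrete_features=True, random_state=0)
ig_idx = np.argsort(ig_scores)[::-1][:k]

# Document frequency: how many documents contain each term (class-blind, global score).
df_scores = np.asarray((X > 0).sum(axis=0)).ravel()
df_idx = np.argsort(df_scores)[::-1][:k]

print("chi2:", terms[chi2_idx])
print("IG  :", terms[ig_idx])
print("DF  :", terms[df_idx])
```

In this framing, a "membership" feature is one whose presence indicates the target class, while a "non-membership" feature indicates its absence; class-aware scores such as $\chi^2$ and IG can favor either kind, whereas DF ignores the class entirely, which is the kind of imbalance the study examines.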