{"title":"改进的监督学习神经网络互信息特征选择器","authors":"Nojun Kwak, Chong-Ho Choi","doi":"10.1109/IJCNN.1999.831152","DOIUrl":null,"url":null,"abstract":"In classification problems, we use a set of attributes which are relevant, irrelevant or redundant. By selecting only the relevant attributes of the data as input features of a classifying system and excluding redundant ones, higher performance is expected with smaller computational effort. We propose an algorithm of feature selection that makes more careful use of the mutual informations between input attributes and others than the mutual information feature selector (MIFS). The proposed algorithm is applied in several feature selection problems and compared with the MIFS. Experimental results show that the proposed algorithm can be well used in feature selection problems.","PeriodicalId":157719,"journal":{"name":"IJCNN'99. International Joint Conference on Neural Networks. Proceedings (Cat. No.99CH36339)","volume":"40 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1999-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"48","resultStr":"{\"title\":\"Improved mutual information feature selector for neural networks in supervised learning\",\"authors\":\"Nojun Kwak, Chong-Ho Choi\",\"doi\":\"10.1109/IJCNN.1999.831152\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In classification problems, we use a set of attributes which are relevant, irrelevant or redundant. By selecting only the relevant attributes of the data as input features of a classifying system and excluding redundant ones, higher performance is expected with smaller computational effort. We propose an algorithm of feature selection that makes more careful use of the mutual informations between input attributes and others than the mutual information feature selector (MIFS). The proposed algorithm is applied in several feature selection problems and compared with the MIFS. Experimental results show that the proposed algorithm can be well used in feature selection problems.\",\"PeriodicalId\":157719,\"journal\":{\"name\":\"IJCNN'99. International Joint Conference on Neural Networks. Proceedings (Cat. No.99CH36339)\",\"volume\":\"40 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"1999-07-10\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"48\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IJCNN'99. International Joint Conference on Neural Networks. Proceedings (Cat. No.99CH36339)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/IJCNN.1999.831152\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IJCNN'99. International Joint Conference on Neural Networks. Proceedings (Cat. No.99CH36339)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IJCNN.1999.831152","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Improved mutual information feature selector for neural networks in supervised learning
In classification problems, a classifier has access to a set of attributes that may be relevant, irrelevant, or redundant. By selecting only the relevant attributes of the data as input features of a classification system and excluding the redundant ones, higher performance can be expected at a smaller computational cost. We propose a feature selection algorithm that makes more careful use of the mutual information between input attributes than the mutual information feature selector (MIFS). The proposed algorithm is applied to several feature selection problems and compared with MIFS. Experimental results show that the proposed algorithm performs well on these problems.
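To make the kind of selection criterion the abstract alludes to concrete, the sketch below performs greedy forward selection with the standard MIFS score I(C; f) - beta * sum over s in S of I(f; s), where C is the class, f a candidate feature, and S the set of already-selected features; the paper's contribution lies in refining how the redundancy terms are used, and the exact improved criterion is given in the paper itself. This is an illustrative assumption, not the authors' implementation: the function name greedy_mi_select, the value of beta, and the toy data are all hypothetical.

```python
# Minimal sketch of greedy mutual-information feature selection in the spirit
# of MIFS, which this paper refines.  The standard MIFS score
#   I(C; f) - beta * sum_{s in S} I(f; s)
# is used as an illustrative stand-in for the paper's improved criterion.
# Features are assumed discrete (or pre-discretized); beta and the toy data
# are assumptions, not values from the paper.
import numpy as np
from sklearn.metrics import mutual_info_score  # I(x; y) for discrete x, y


def greedy_mi_select(X, y, k, beta=1.0):
    """Greedily pick k column indices of X by relevance minus redundancy."""
    selected, remaining = [], list(range(X.shape[1]))
    for _ in range(k):
        best_f, best_score = None, -np.inf
        for f in remaining:
            relevance = mutual_info_score(y, X[:, f])           # I(C; f)
            redundancy = sum(mutual_info_score(X[:, s], X[:, f])
                             for s in selected)                 # sum_s I(f; s)
            score = relevance - beta * redundancy
            if score > best_score:
                best_f, best_score = f, score
        selected.append(best_f)
        remaining.remove(best_f)
    return selected


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    y = rng.integers(0, 2, size=1000)                            # binary class label
    relevant = y ^ rng.choice([0, 1], size=1000, p=[0.9, 0.1])   # noisy copy of y
    redundant = relevant.copy()                                  # duplicate of the relevant feature
    irrelevant = rng.integers(0, 4, size=1000)                   # independent noise
    X = np.column_stack([irrelevant, relevant, redundant])
    print(greedy_mi_select(X, y, k=2, beta=1.0))
```

On this toy data the relevant column should be picked first, and with a sufficiently large beta the duplicated column is penalized by the redundancy term, illustrating how such a criterion excludes redundant attributes rather than ranking features by relevance alone.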