{"title":"Ensemble classifier design by parallel distributed implementation of genetic fuzzy rule selection for large data sets","authors":"Y. Nojima, S. Mihara, H. Ishibuchi","doi":"10.1109/CEC.2010.5586393","DOIUrl":null,"url":null,"abstract":"Evolutionary algorithms have been actively applied to knowledge discovery, data mining and machine learning under the name of genetics-based machine learning (GBML). The main advantage of using evolutionary algorithms in those application areas is their flexibility: Various knowledge extraction criteria such as accuracy and complexity can be easily utilized as fitness functions. On the other hand, the main disadvantage is their large computation load. It is not easy to apply evolutionary algorithms to large data sets. The scalability improvement to large data sets is one of the main research issues in GBML. In our former studies, we proposed an idea of parallel distributed implementation of GBML and examined its effectiveness for genetic fuzzy rule selection. The point of our idea was to realize a quadratic speed-up by dividing not only a population but also training data. Training data subsets were periodically rotated over sub-populations in order to prevent each sub-population from over-fitting to a specific training data subset. In this paper, we propose the use of parallel distributed implementation for the design of ensemble classifiers. An ensemble classifier is designed by combining base classifiers, each of which is obtained from each sub-population. Through computational experiments on parallel distributed genetic fuzzy rule selection, we examine the generalization ability of designed ensemble classifiers under various settings with respect to the size of training data subsets and their rotation frequency.","PeriodicalId":6344,"journal":{"name":"2009 IEEE Congress on Evolutionary Computation","volume":"37 1","pages":"1-8"},"PeriodicalIF":0.0000,"publicationDate":"2010-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2009 IEEE Congress on Evolutionary Computation","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CEC.2010.5586393","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 5
Abstract
Evolutionary algorithms have been actively applied to knowledge discovery, data mining, and machine learning under the name of genetics-based machine learning (GBML). The main advantage of using evolutionary algorithms in these application areas is their flexibility: various knowledge extraction criteria, such as accuracy and complexity, can easily be used as fitness functions. Their main disadvantage, on the other hand, is their heavy computational load, which makes it difficult to apply them to large data sets. Improving scalability to large data sets is therefore one of the main research issues in GBML. In our previous studies, we proposed a parallel distributed implementation of GBML and examined its effectiveness for genetic fuzzy rule selection. The key idea was to realize a quadratic speed-up by dividing not only the population but also the training data: with both split into N parts, each sub-population performs N times fewer fitness evaluations, and each evaluation uses only 1/N of the training data. Training data subsets were periodically rotated over the sub-populations to prevent each sub-population from over-fitting to a specific subset. In this paper, we propose using this parallel distributed implementation to design ensemble classifiers. An ensemble classifier is constructed by combining base classifiers, one obtained from each sub-population. Through computational experiments on parallel distributed genetic fuzzy rule selection, we examine the generalization ability of the designed ensemble classifiers under various settings of the training data subset size and rotation frequency.
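The scheme described in the abstract can be illustrated with a short sketch. The following Python code is a toy rendering, under stated assumptions, of the parallel distributed loop: both the population and the training data are split into N_SUB parts, data subsets rotate over sub-populations every ROTATION_INTERVAL generations, and the best individual of each sub-population becomes a base classifier in a majority-vote ensemble. The bit-string encoding, the toy fitness, and predict() are illustrative placeholders, not the authors' actual genetic fuzzy rule selection.

```python
# A minimal conceptual sketch (not the authors' implementation) of the
# parallel distributed scheme described above.
import random

N_SUB = 5               # number of sub-populations / training data subsets
SUBPOP_SIZE = 20        # individuals per sub-population
N_GENERATIONS = 50
ROTATION_INTERVAL = 10  # rotate data subsets every 10 generations
RULE_POOL_SIZE = 8      # toy encoding: one bit per candidate rule

def random_individual():
    # Each bit marks whether a candidate rule is selected.
    return [random.randint(0, 1) for _ in range(RULE_POOL_SIZE)]

def fitness(individual, data_subset):
    # Placeholder: a real implementation would score the accuracy and
    # complexity of the selected fuzzy rules on the given subset.
    return sum(individual) + 0.01 * len(data_subset)

def evolve(subpop, data_subset):
    # One toy generation: keep the better half, add mutated copies.
    ranked = sorted(subpop, key=lambda ind: fitness(ind, data_subset),
                    reverse=True)
    parents = ranked[: SUBPOP_SIZE // 2]
    children = [[bit ^ (random.random() < 0.1) for bit in p] for p in parents]
    return parents + children

def predict(classifier, x):
    # Stand-in for fuzzy-rule-based inference (purely illustrative).
    return (sum(classifier) + x) % 2

# Toy training data, split into N_SUB disjoint subsets.
training_data = list(range(1000))
subsets = [training_data[i::N_SUB] for i in range(N_SUB)]

subpops = [[random_individual() for _ in range(SUBPOP_SIZE)]
           for _ in range(N_SUB)]

for gen in range(N_GENERATIONS):
    if gen > 0 and gen % ROTATION_INTERVAL == 0:
        # Rotate subsets over sub-populations so no sub-population
        # over-fits a single training data subset.
        subsets = subsets[1:] + subsets[:1]
    subpops = [evolve(sp, ds) for sp, ds in zip(subpops, subsets)]

# Each sub-population contributes its best individual as a base classifier.
base_classifiers = [max(sp, key=lambda ind, ds=ds: fitness(ind, ds))
                    for sp, ds in zip(subpops, subsets)]

def ensemble_predict(x):
    # Majority vote over the base classifiers.
    votes = [predict(clf, x) for clf in base_classifiers]
    return max(set(votes), key=votes.count)

print(ensemble_predict(42))
```

In the actual system, each sub-population would run on its own processor; the sequential loop above only mimics the logical structure of the parallel scheme.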