Weight Adjusted Naive Bayes
Liangjun Yu, Liangxiao Jiang, Lungan Zhang, Dianhong Wang
2018 IEEE 30th International Conference on Tools with Artificial Intelligence (ICTAI), November 2018. DOI: 10.1109/ICTAI.2018.00129
Citations: 2
Abstract
Naive Bayes (NB) remains one of the top 10 data mining algorithms due to its simplicity, efficiency, and efficacy, but its attribute independence assumption rarely holds in reality. Attribute weighting is an effective way to relax this unrealistic assumption, yet it has received less attention than it warrants. Attribute weighting approaches can be broadly divided into two categories: filters and wrappers. In this paper, we focus on wrapper attribute weighting approaches because they generally achieve higher classification performance than filter approaches. We propose a weight adjusted naive Bayes approach, denoted WANB. WANB learns the importance of each attribute for classification from the training data and updates a weight vector that reflects this importance; an objective-function-driven weight adjustment procedure searches for the optimal weight vector. We compare WANB with standard NB and state-of-the-art attribute weighting approaches. Empirical studies on a collection of 36 benchmark datasets show that WANB significantly outperforms NB and all of the filter approaches used for comparison. At the same time, compared to the existing wrapper approach DEWANB, WANB is much more efficient and comprehensible.
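To make the role of the weight vector concrete, below is a minimal Python sketch (not the authors' code) of how attribute weights enter naive Bayes classification: each conditional probability P(x_i | c) is raised to the power w_i, so the decision rule becomes P(c | x) ∝ P(c) · ∏_i P(x_i | c)^{w_i}. The class name WeightedNB, the Laplace smoothing parameter alpha, and the assumption that test attribute values were seen during training are all illustrative choices; WANB's actual objective function and weight update rule are not specified in the abstract and are therefore left out, with w passed in as a plain parameter.

import numpy as np

class WeightedNB:
    def fit(self, X, y, alpha=1.0):
        """Estimate class priors and Laplace-smoothed conditionals
        for a categorical data matrix X and label vector y."""
        self.classes_, y_idx = np.unique(y, return_inverse=True)
        n, d = X.shape
        k = len(self.classes_)
        self.log_prior_ = (np.log(np.bincount(y_idx, minlength=k) + alpha)
                           - np.log(n + k * alpha))
        # cond_[j][c, v] holds log P(X_j = value v | class c), smoothed.
        self.values_ = [np.unique(X[:, j]) for j in range(d)]
        self.cond_ = []
        for j in range(d):
            v_idx = np.searchsorted(self.values_[j], X[:, j])
            counts = np.zeros((k, len(self.values_[j])))
            np.add.at(counts, (y_idx, v_idx), 1.0)
            counts += alpha
            self.cond_.append(np.log(counts / counts.sum(axis=1, keepdims=True)))
        return self

    def predict(self, X, w):
        """Classify rows of X under non-negative attribute weights w.
        Sketch assumes every test value appeared in training."""
        scores = np.tile(self.log_prior_, (len(X), 1))
        for j, logp in enumerate(self.cond_):
            v_idx = np.searchsorted(self.values_[j], X[:, j])
            scores += w[j] * logp[:, v_idx].T  # weight scales the log-likelihood
        return self.classes_[np.argmax(scores, axis=1)]

With w = np.ones(d) this reduces to standard NB; the contribution of a wrapper method like WANB lies in searching for a better w by repeatedly evaluating candidate weight vectors against an objective measured on the training set.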