Improving classification performance for the minority class in highly imbalanced dataset using boosting
M. Abouelenien, Xiaohui Yuan, P. Duraisamy, Xiaojing Yuan
2012 Third International Conference on Computing, Communication and Networking Technologies (ICCCNT'12)
Published: 2012-07-26
DOI: 10.1109/ICCCNT.2012.6477850
Citations: 1
Abstract
Data imbalance is a common property of medical and biological data and usually results in degraded generalization performance. In this article, we present a novel boosting method that addresses two important questions in learning from imbalanced datasets: how can we maximize performance on the minority instances without compromising performance on the majority instances, and how can we select training instances that comprehensively represent the data distribution while avoiding high computational cost? Our method maximizes the use of the available samples, with priority given to the minority samples. The base classifiers are weighted by their sensitivities derived from the training examples. Using synthetic and real-world datasets, we demonstrate that our method improves both sensitivity and accuracy without a major reduction in specificity. In contrast to AdaBoost, our method takes much less time, which makes it applicable to real-world problems with large amounts of data.
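The two ingredients the abstract names — subsampling that always keeps every minority sample, and base classifiers weighted by their sensitivity (true-positive rate) rather than overall error — can be illustrated with a minimal sketch. This is not the authors' exact algorithm (the paper's sample-selection and weighting details are not reproduced here); it is an illustrative ensemble of decision stumps, with all function names (`make_stump`, `train_ensemble`, etc.) being hypothetical:

```python
import random

def make_stump(X, y):
    """Exhaustively fit a one-feature threshold classifier (decision stump)."""
    best, best_acc = None, -1.0
    for f in range(len(X[0])):
        for t in sorted(set(x[f] for x in X)):
            for sign in (1, -1):
                pred = [1 if sign * (x[f] - t) >= 0 else 0 for x in X]
                acc = sum(p == yy for p, yy in zip(pred, y)) / len(y)
                if acc > best_acc:
                    best_acc, best = acc, (f, t, sign)
    f, t, sign = best
    return lambda x: 1 if sign * (x[f] - t) >= 0 else 0

def sensitivity(clf, X, y):
    """True-positive rate of clf on the minority (positive) class."""
    pos = [x for x, yy in zip(X, y) if yy == 1]
    return sum(clf(x) == 1 for x in pos) / len(pos) if pos else 0.0

def train_ensemble(X, y, rounds=5, seed=0):
    rng = random.Random(seed)
    minority = [(x, yy) for x, yy in zip(X, y) if yy == 1]
    majority = [(x, yy) for x, yy in zip(X, y) if yy == 0]
    ensemble = []
    for _ in range(rounds):
        # Every minority sample is used in every round; only the
        # majority class is subsampled, keeping each round balanced.
        sub = minority + rng.sample(majority, min(len(minority), len(majority)))
        Xs, ys = [x for x, _ in sub], [yy for _, yy in sub]
        clf = make_stump(Xs, ys)
        # Weight the base classifier by its sensitivity on the full
        # training set, so minority-friendly classifiers vote louder.
        ensemble.append((sensitivity(clf, X, y), clf))
    return ensemble

def predict(ensemble, x):
    score = sum(w * (1 if clf(x) == 1 else -1) for w, clf in ensemble)
    return 1 if score > 0 else 0
```

On a toy 1-D dataset with eight majority points below 0.5 and two minority points above 0.8, each stump separates the classes perfectly, earns sensitivity weight 1.0, and the weighted vote classifies new points on either side of the threshold correctly. Replacing the stump with a stronger base learner changes nothing structural in the sketch.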