{"title":"An Empirical Study of the Impact of Test Strategies on Online Optimization for Ensemble-Learning Defect Prediction","authors":"Kensei Hamamoto, Masateru Tsunoda, Amjed Tahir, Kwabena Ebo Bennin, Akito Monden, Koji Toda, Keitaro Nakasai, Kenichi Matsumoto","doi":"arxiv-2409.06264","DOIUrl":null,"url":null,"abstract":"Ensemble learning methods have been used to enhance the reliability of defect\nprediction models. However, there is an inconclusive stability of a single\nmethod attaining the highest accuracy among various software projects. This\nwork aims to improve the performance of ensemble-learning defect prediction\namong such projects by helping select the highest accuracy ensemble methods. We\nemploy bandit algorithms (BA), an online optimization method, to select the\nhighest-accuracy ensemble method. Each software module is tested sequentially,\nand bandit algorithms utilize the test outcomes of the modules to evaluate the\nperformance of the ensemble learning methods. The test strategy followed might\nimpact the testing effort and prediction accuracy when applying online\noptimization. Hence, we analyzed the test order's influence on BA's\nperformance. In our experiment, we used six popular defect prediction datasets,\nfour ensemble learning methods such as bagging, and three test strategies such\nas testing positive-prediction modules first (PF). Our results show that when\nBA is applied with PF, the prediction accuracy improved on average, and the\nnumber of found defects increased by 7% on a minimum of five out of six\ndatasets (although with a slight increase in the testing effort by about 4%\nfrom ordinal ensemble learning). Hence, BA with PF strategy is the most\neffective to attain the highest prediction accuracy using ensemble methods on\nvarious projects.","PeriodicalId":501278,"journal":{"name":"arXiv - CS - Software Engineering","volume":"11 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Software Engineering","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.06264","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Ensemble learning methods have been used to enhance the reliability of defect prediction models. However, no single ensemble method consistently attains the highest accuracy across various software projects. This work aims to improve the performance of ensemble-learning defect prediction across such projects by helping select the highest-accuracy ensemble method. We employ bandit algorithms (BA), an online optimization method, to select the highest-accuracy ensemble method. Each software module is tested sequentially, and the bandit algorithm uses the test outcomes of the modules to evaluate the performance of the ensemble learning methods. The test strategy followed may affect the testing effort and prediction accuracy when applying online optimization. Hence, we analyzed the influence of the test order on BA's performance. In our experiment, we used six popular defect prediction datasets, four ensemble learning methods (e.g., bagging), and three test strategies (e.g., testing positive-prediction modules first, PF). Our results show that when BA is applied with PF, the prediction accuracy improved on average, and the number of defects found increased by 7% on at least five of the six datasets (although with a slight increase of about 4% in testing effort compared with ordinary ensemble learning). Hence, BA with the PF strategy is the most effective approach for attaining the highest prediction accuracy with ensemble methods across various projects.
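
The abstract does not give implementation details, but the online selection loop it describes can be sketched as follows. This is a minimal illustration only, assuming an epsilon-greedy bandit, two hypothetical ensemble predictors (ensemble_a, ensemble_b), and synthetic module data; none of these specifics come from the paper.

```python
# Minimal sketch: a bandit chooses, module by module, which ensemble-learning
# predictor to trust, and updates its estimates from the observed test outcomes.
# Assumptions (not from the paper): epsilon-greedy selection, dummy predictors,
# and synthetic module data.
import random

random.seed(0)

# Hypothetical "ensemble methods": each maps a module's features to a defect
# prediction (True = predicted defective). Stand-ins for e.g. bagging.
def ensemble_a(features):
    return features["loc"] > 300

def ensemble_b(features):
    return features["churn"] > 50

ARMS = {"ensemble_a": ensemble_a, "ensemble_b": ensemble_b}

def epsilon_greedy_select(stats, epsilon=0.1):
    """Pick an arm: explore with probability epsilon, otherwise exploit."""
    if random.random() < epsilon:
        return random.choice(list(stats))
    # Exploit: the arm with the highest observed prediction accuracy so far.
    return max(stats, key=lambda a: stats[a]["hits"] / max(stats[a]["pulls"], 1))

def run_online_selection(modules, epsilon=0.1):
    """Test modules sequentially; reward an arm when its prediction matches
    the module's actual test outcome (defective or not)."""
    stats = {arm: {"pulls": 0, "hits": 0} for arm in ARMS}
    for module in modules:
        arm = epsilon_greedy_select(stats, epsilon)
        prediction = ARMS[arm](module["features"])
        actual_defective = module["defective"]  # revealed by testing the module
        stats[arm]["pulls"] += 1
        stats[arm]["hits"] += int(prediction == actual_defective)
    return stats

# Synthetic modules, ordered in a "positive-prediction-first" (PF) style:
# modules predicted defective are tested before the others.
modules = [
    {"features": {"loc": 500, "churn": 80}, "defective": True},
    {"features": {"loc": 420, "churn": 10}, "defective": True},
    {"features": {"loc": 120, "churn": 60}, "defective": False},
    {"features": {"loc": 90, "churn": 5}, "defective": False},
]

print(run_online_selection(modules))
```

Under these assumptions, the test order matters because the outcomes observed early (here, the modules predicted defective) drive which ensemble method the bandit favors for the remaining modules.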