{"title":"测试策略对集合学习缺陷预测在线优化影响的实证研究","authors":"Kensei Hamamoto, Masateru Tsunoda, Amjed Tahir, Kwabena Ebo Bennin, Akito Monden, Koji Toda, Keitaro Nakasai, Kenichi Matsumoto","doi":"arxiv-2409.06264","DOIUrl":null,"url":null,"abstract":"Ensemble learning methods have been used to enhance the reliability of defect\nprediction models. However, there is an inconclusive stability of a single\nmethod attaining the highest accuracy among various software projects. This\nwork aims to improve the performance of ensemble-learning defect prediction\namong such projects by helping select the highest accuracy ensemble methods. We\nemploy bandit algorithms (BA), an online optimization method, to select the\nhighest-accuracy ensemble method. Each software module is tested sequentially,\nand bandit algorithms utilize the test outcomes of the modules to evaluate the\nperformance of the ensemble learning methods. The test strategy followed might\nimpact the testing effort and prediction accuracy when applying online\noptimization. Hence, we analyzed the test order's influence on BA's\nperformance. In our experiment, we used six popular defect prediction datasets,\nfour ensemble learning methods such as bagging, and three test strategies such\nas testing positive-prediction modules first (PF). Our results show that when\nBA is applied with PF, the prediction accuracy improved on average, and the\nnumber of found defects increased by 7% on a minimum of five out of six\ndatasets (although with a slight increase in the testing effort by about 4%\nfrom ordinal ensemble learning). Hence, BA with PF strategy is the most\neffective to attain the highest prediction accuracy using ensemble methods on\nvarious projects.","PeriodicalId":501278,"journal":{"name":"arXiv - CS - Software Engineering","volume":"11 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"An Empirical Study of the Impact of Test Strategies on Online Optimization for Ensemble-Learning Defect Prediction\",\"authors\":\"Kensei Hamamoto, Masateru Tsunoda, Amjed Tahir, Kwabena Ebo Bennin, Akito Monden, Koji Toda, Keitaro Nakasai, Kenichi Matsumoto\",\"doi\":\"arxiv-2409.06264\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Ensemble learning methods have been used to enhance the reliability of defect\\nprediction models. However, there is an inconclusive stability of a single\\nmethod attaining the highest accuracy among various software projects. This\\nwork aims to improve the performance of ensemble-learning defect prediction\\namong such projects by helping select the highest accuracy ensemble methods. We\\nemploy bandit algorithms (BA), an online optimization method, to select the\\nhighest-accuracy ensemble method. Each software module is tested sequentially,\\nand bandit algorithms utilize the test outcomes of the modules to evaluate the\\nperformance of the ensemble learning methods. The test strategy followed might\\nimpact the testing effort and prediction accuracy when applying online\\noptimization. Hence, we analyzed the test order's influence on BA's\\nperformance. In our experiment, we used six popular defect prediction datasets,\\nfour ensemble learning methods such as bagging, and three test strategies such\\nas testing positive-prediction modules first (PF). 
Our results show that when\\nBA is applied with PF, the prediction accuracy improved on average, and the\\nnumber of found defects increased by 7% on a minimum of five out of six\\ndatasets (although with a slight increase in the testing effort by about 4%\\nfrom ordinal ensemble learning). Hence, BA with PF strategy is the most\\neffective to attain the highest prediction accuracy using ensemble methods on\\nvarious projects.\",\"PeriodicalId\":501278,\"journal\":{\"name\":\"arXiv - CS - Software Engineering\",\"volume\":\"11 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-10\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Software Engineering\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.06264\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Software Engineering","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.06264","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0
摘要
集合学习方法已被用于提高缺陷预测模型的可靠性。然而,在各种软件项目中,单一方法获得最高准确率的稳定性并不稳定。这项工作旨在通过帮助选择准确率最高的集合方法,提高集合学习缺陷预测在此类项目中的性能。我们采用一种在线优化方法--强盗算法(BA)来选择精度最高的集合方法。每个软件模块按顺序进行测试,匪算法利用模块的测试结果来评估集合学习方法的性能。在应用在线优化时,测试策略可能会影响测试工作量和预测精度。因此,我们分析了测试顺序对 BA 性能的影响。在实验中,我们使用了 6 个流行的缺陷预测数据集、4 种集合学习方法(如 bagging)和 3 种测试策略(如先测试正预测模块 (PF))。实验结果表明,当应用带有 PF 的 BA 时,预测准确率平均有所提高,在六个数据集中的五个数据集上,发现的缺陷数量至少增加了 7%(尽管与顺序集合学习相比,测试工作量略微增加了约 4%)。因此,在各种项目中使用集合方法,使用 PF 策略的 BA 是获得最高预测精度的最有效方法。
An Empirical Study of the Impact of Test Strategies on Online Optimization for Ensemble-Learning Defect Prediction
Ensemble learning methods have been used to enhance the reliability of defect prediction models. However, no single ensemble method stably attains the highest accuracy across different software projects. This work aims to improve the performance of ensemble-learning defect prediction on such projects by helping select the highest-accuracy ensemble method for each. We employ bandit algorithms (BA), an online optimization method, to select the highest-accuracy ensemble method. Each software module is tested sequentially, and the bandit algorithm uses the test outcomes of the modules to evaluate the performance of the ensemble learning methods. The test strategy followed might impact the testing effort and the prediction accuracy when applying online optimization. Hence, we analyzed the influence of the test order on BA's performance. In our experiment, we used six popular defect prediction datasets, four ensemble learning methods (e.g., bagging), and three test strategies (e.g., testing positive-prediction modules first, PF). Our results show that when BA is applied with PF, the prediction accuracy improved on average, and the number of defects found increased by 7% on at least five of the six datasets (with a slight increase in testing effort, of about 4%, over ordinary ensemble learning). Hence, BA with the PF strategy is the most effective way to attain the highest prediction accuracy with ensemble methods across various projects.
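
The abstract does not include an implementation, but the core loop it describes (test modules one at a time, treat each ensemble method as a bandit arm, reward an arm when its prediction matches the test outcome) is easy to sketch. Below is a minimal, self-contained Python sketch using synthetic data; the epsilon-greedy policy, the way PF ordering is derived from an initial predictor, and all names (method_acc, test_order, etc.) are illustrative assumptions, not the authors' exact setup.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for trained ensemble methods: each "method" predicts
# defect-proneness with a different (hidden) accuracy. In the paper these
# would be bagging, boosting, etc. trained on historical project data.
N_MODULES, N_METHODS = 500, 4
true_labels = rng.random(N_MODULES) < 0.2          # ~20% defective modules
method_acc = np.array([0.60, 0.70, 0.75, 0.80])    # hidden per-method accuracy
predictions = np.where(
    rng.random((N_METHODS, N_MODULES)) < method_acc[:, None],
    true_labels, ~true_labels
)

# PF strategy (assumption: order by one method's positive predictions):
# modules predicted defective are tested first, so defect outcomes arrive
# early and give the bandit informative feedback sooner.
initial = predictions[0]
test_order = np.concatenate([np.where(initial)[0], np.where(~initial)[0]])

# Epsilon-greedy bandit: one arm per ensemble method; the reward is 1 when
# the chosen method's prediction matches the module's actual test outcome.
eps = 0.1
pulls = np.zeros(N_METHODS)
wins = np.zeros(N_METHODS)
for m in test_order:
    if rng.random() < eps or pulls.min() == 0:
        arm = int(rng.integers(N_METHODS))          # explore
    else:
        arm = int(np.argmax(wins / pulls))          # exploit best arm so far
    outcome = true_labels[m]                        # the module is tested here
    pulls[arm] += 1
    wins[arm] += (predictions[arm, m] == outcome)

est = wins / np.maximum(pulls, 1)
print("estimated accuracy per method:", np.round(est, 3))
print("selected method:", int(np.argmax(est)))

Running the sketch, the bandit's estimated accuracies converge toward the hidden method_acc values, and the highest-accuracy method is selected; swapping test_order for a random or negative-first order illustrates the paper's point that the test strategy changes how quickly (and how cheaply, in testing effort) that selection happens.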