{"title":"Production and test bug report classification based on transfer learning","authors":"Misoo Kim , Youngkyoung Kim , Eunseok Lee","doi":"10.1016/j.infsof.2025.107685","DOIUrl":null,"url":null,"abstract":"<div><h3>Context:</h3><div>Recent studies indicate that the classification of production and test bug reports can substantially enhance the accuracy of performance evaluation and the effectiveness of information retrieval–based bug localization (IRBL) for software reliability.</div></div><div><h3>Objective:</h3><div>However, manually classifying these bug reports is time-consuming for developers. This study introduces a production and test bug report classification (ProTeC) framework for automatically classifying these reports.</div></div><div><h3>Methods:</h3><div>The framework’s novelty lies in leveraging a set of production- and test-source files and employing transfer learning to address the issue of insufficient and sparse bug reports in machine-learning applications. The ProTeC framework trains and fine-tunes a source file classifier to develop a bug report classifier by transferring production-test distinguishing knowledge.</div></div><div><h3>Results:</h3><div>To validate the effectiveness and general practicality of ProTeC, we conducted large-scale experiments using 2,522 bug reports across 12 machine/deep learning model variations to train an automatic classifier. Our results, on average, demonstrate that ProTeC’s macro F1-score is 28.6% higher than that of a bug report-based classifier, and it can improve the mean average precision of IRBL by 17.6%.</div></div><div><h3>Conclusion:</h3><div>These positive trends were observed in most model variations, indicating that ProTeC consistently performs well in classifying bug reports regardless of the model used, thereby improving IRBL performance.</div></div>","PeriodicalId":54983,"journal":{"name":"Information and Software Technology","volume":"181 ","pages":"Article 107685"},"PeriodicalIF":3.8000,"publicationDate":"2025-02-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Information and Software Technology","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0950584925000242","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
引用次数: 0
Abstract
Context:
Recent studies indicate that the classification of production and test bug reports can substantially enhance the accuracy of performance evaluation and the effectiveness of information retrieval–based bug localization (IRBL) for software reliability.
Objective:
However, manually classifying these bug reports is time-consuming for developers. This study introduces a production and test bug report classification (ProTeC) framework for automatically classifying these reports.
Methods:
The framework’s novelty lies in leveraging a set of production and test source files and employing transfer learning to address the scarcity and sparsity of labeled bug reports in machine-learning applications. The ProTeC framework trains a source-file classifier and then fine-tunes it into a bug report classifier, transferring the knowledge needed to distinguish production from test artifacts; a rough sketch of this transfer step follows.
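The sketch below illustrates only the general transfer-learning idea described above, not the authors’ ProTeC implementation: a classifier is first trained to separate production from test source files and is then continued (fine-tuned) on the scarcer bug reports. The corpora, labels, and model choice (TF-IDF features with an SGD-based logistic regression warm-started via `partial_fit`) are illustrative assumptions.

```python
# Minimal sketch of the transfer idea (assumed setup, not the paper's code).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import SGDClassifier
import numpy as np

# Toy corpora standing in for real project data (hypothetical examples).
source_files = [
    "public class OrderService { void placeOrder() { ... } }",        # production
    "import org.junit.Test; public class OrderServiceTest { ... }",   # test
]
source_labels = np.array([0, 1])  # 0 = production, 1 = test

bug_reports = [
    "NullPointerException when placing an order in production",
    "Flaky assertion failure in OrderServiceTest on CI",
]
report_labels = np.array([0, 1])

# Shared vocabulary so knowledge learned from source files carries over.
vectorizer = TfidfVectorizer(lowercase=True, token_pattern=r"[A-Za-z_][A-Za-z0-9_]+")
vectorizer.fit(source_files + bug_reports)
X_source = vectorizer.transform(source_files)
X_reports = vectorizer.transform(bug_reports)

# Step 1: "pre-train" on the abundant source files (production vs. test).
clf = SGDClassifier(loss="log_loss", random_state=0)  # requires scikit-learn >= 1.1
clf.partial_fit(X_source, source_labels, classes=[0, 1])

# Step 2: fine-tune the same weights on the scarcer bug reports.
clf.partial_fit(X_reports, report_labels)

print(clf.predict(X_reports))
```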
Results:
To validate the effectiveness and general practicality of ProTeC, we conducted large-scale experiments using 2,522 bug reports across 12 machine/deep-learning model variations to train an automatic classifier. Our results demonstrate that, on average, ProTeC’s macro F1-score is 28.6% higher than that of a bug-report-based classifier and that it improves the mean average precision of IRBL by 17.6%.
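For reference, the two metrics reported above can be computed as in the short sketch below. This is not the authors’ evaluation code: the labels and ranked lists are hypothetical placeholders, macro F1 uses scikit-learn’s standard implementation, and mean average precision follows the usual IRBL-style definition over ranked file lists.

```python
# Sketch of the reported metrics with toy data (assumed, for illustration only).
from sklearn.metrics import f1_score

# Macro F1 over two classes (production = 0, test = 1).
y_true = [0, 1, 1, 0, 1]
y_pred = [0, 1, 0, 0, 1]
macro_f1 = f1_score(y_true, y_pred, average="macro")

def average_precision(ranked_relevance):
    """AP for one bug report: ranked_relevance[i] is 1 if the file at rank i+1 is buggy."""
    hits, ap = 0, 0.0
    for rank, rel in enumerate(ranked_relevance, start=1):
        if rel:
            hits += 1
            ap += hits / rank
    return ap / max(hits, 1)

# Mean average precision over a set of bug reports (toy ranked lists).
rankings = [[1, 0, 0, 1], [0, 1, 0, 0]]
mean_ap = sum(average_precision(r) for r in rankings) / len(rankings)

print(round(macro_f1, 3), round(mean_ap, 3))
```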
Conclusion:
These positive trends were observed in most model variations, indicating that ProTeC consistently performs well in classifying bug reports regardless of the model used, thereby improving IRBL performance.
About the journal:
Information and Software Technology is the international archival journal focusing on research and experience that contributes to the improvement of software development practices. The journal's scope includes methods and techniques to better engineer software and manage its development. Articles submitted for review should have a clear component of software engineering or address ways to improve the engineering and management of software development. Areas covered by the journal include:
• Software management, quality and metrics
• Software processes
• Software architecture, modelling, specification, design and programming
• Functional and non-functional software requirements
• Software testing and verification & validation
• Empirical studies of all aspects of engineering and managing software development
Short Communications is a new section dedicated to short papers addressing new ideas, controversial opinions, "Negative" results and much more. Read the Guide for authors for more information.
The journal encourages and welcomes submissions of systematic literature studies (reviews and maps) within the scope of the journal. Information and Software Technology is the premier outlet for systematic literature studies in software engineering.