{"title":"在项目发展过程中变化缺陷预测方法:初步调查","authors":"Salvatore Geremia, D. Tamburri","doi":"10.1109/MALTESQUE.2018.8368451","DOIUrl":null,"url":null,"abstract":"Defect prediction approaches use various features of software product or process to prioritize testing, analysis and general quality assurance activities. Such approaches require the availability of project's historical data, making them inapplicable in early phase. To cope with this problem, researchers have proposed cross-project and even cross-company prediction models, which use training material from other projects to build the model. Despite such advances, there is limited knowledge of how, as the project evolves, it would be convenient to still keep using data from other projects, and when, instead, it might become convenient to switch towards a local prediction model. This paper empirically investigates, using historical data from four open source projects, on how the performance of various kinds of defect prediction approaches — within-project prediction, local and global cross-project prediction, and mixed (injected local cross) prediction — varies over time. Results of the study are part of a long-term investigation towards supporting the customization of defect prediction models over projects' history.","PeriodicalId":345739,"journal":{"name":"2018 IEEE Workshop on Machine Learning Techniques for Software Quality Evaluation (MaLTeSQuE)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"Varying defect prediction approaches during project evolution: A preliminary investigation\",\"authors\":\"Salvatore Geremia, D. Tamburri\",\"doi\":\"10.1109/MALTESQUE.2018.8368451\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Defect prediction approaches use various features of software product or process to prioritize testing, analysis and general quality assurance activities. Such approaches require the availability of project's historical data, making them inapplicable in early phase. To cope with this problem, researchers have proposed cross-project and even cross-company prediction models, which use training material from other projects to build the model. Despite such advances, there is limited knowledge of how, as the project evolves, it would be convenient to still keep using data from other projects, and when, instead, it might become convenient to switch towards a local prediction model. This paper empirically investigates, using historical data from four open source projects, on how the performance of various kinds of defect prediction approaches — within-project prediction, local and global cross-project prediction, and mixed (injected local cross) prediction — varies over time. 
Results of the study are part of a long-term investigation towards supporting the customization of defect prediction models over projects' history.\",\"PeriodicalId\":345739,\"journal\":{\"name\":\"2018 IEEE Workshop on Machine Learning Techniques for Software Quality Evaluation (MaLTeSQuE)\",\"volume\":\"11 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2018-03-20\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2018 IEEE Workshop on Machine Learning Techniques for Software Quality Evaluation (MaLTeSQuE)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/MALTESQUE.2018.8368451\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 IEEE Workshop on Machine Learning Techniques for Software Quality Evaluation (MaLTeSQuE)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/MALTESQUE.2018.8368451","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Varying defect prediction approaches during project evolution: A preliminary investigation
Defect prediction approaches use various features of the software product or process to prioritize testing, analysis, and general quality assurance activities. Such approaches require the availability of a project's historical data, making them inapplicable in early project phases. To cope with this problem, researchers have proposed cross-project and even cross-company prediction models, which use training data from other projects to build the model. Despite such advances, there is limited knowledge of how long, as a project evolves, it remains convenient to keep using data from other projects, and when it instead becomes convenient to switch to a local prediction model. Using historical data from four open source projects, this paper empirically investigates how the performance of various kinds of defect prediction approaches, namely within-project prediction, local and global cross-project prediction, and mixed (injected local cross) prediction, varies over time. The results of the study are part of a long-term investigation towards supporting the customization of defect prediction models over a project's history.
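
To make the within-project versus cross-project distinction concrete, the sketch below is a minimal illustration, assuming scikit-learn and purely synthetic per-file metric data; it is not the authors' method or data, only a toy comparison of a model trained on a project's own earlier history against a model trained solely on other projects and evaluated on the same target data.

# Minimal sketch (illustrative only): within-project vs. cross-project defect prediction
# on synthetic per-file metrics. All names, metrics, and data are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)

def synthetic_project(n_files, shift=0.0):
    """Toy (metrics, labels) for one project; `shift` mimics project-specific
    differences in metric distributions (a source of cross-project drift)."""
    X = rng.normal(loc=shift, scale=1.0, size=(n_files, 4))  # 4 toy code/process metrics
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n_files) > 1.0).astype(int)
    return X, y

# Target project with limited history, plus two "other" projects for cross-project training.
X_target, y_target = synthetic_project(200)
Xa, ya = synthetic_project(400, shift=0.3)
Xb, yb = synthetic_project(400, shift=-0.2)
X_cross, y_cross = np.vstack([Xa, Xb]), np.hstack([ya, yb])

# Chronological-style split of the target project: first half as "earlier" history
# for training, second half as "later" releases for evaluation.
X_tr, y_tr = X_target[:100], y_target[:100]
X_te, y_te = X_target[100:], y_target[100:]

within = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)        # within-project model
cross = RandomForestClassifier(random_state=0).fit(X_cross, y_cross)   # cross-project model

print("within-project F1:", f1_score(y_te, within.predict(X_te)))
print("cross-project  F1:", f1_score(y_te, cross.predict(X_te)))

Repeating such a comparison at successive points in a project's history, with growing local training data, is one simple way to observe when a local model starts to outperform models built from other projects' data.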