{"title":"可解释的即时漏洞预测:我们做到了吗?","authors":"Reem Aleithan","doi":"10.1109/ICSE-Companion52605.2021.00056","DOIUrl":null,"url":null,"abstract":"Explaining the prediction results of software bug prediction models is a challenging task, which can provide useful information for developers to understand and fix the predicted bugs. Recently, Jirayus et al.'s proposed to use two model-agnostic techniques (i.e., LIME and iBreakDown) to explain the prediction results of bug prediction models. Although their experiments on file-level bug prediction show promising results, the performance of these techniques on explaining the results of just-in-time (i.e., change-level) bug prediction is unknown. This paper conducts the first empirical study to explore the explainability of these model-agnostic techniques on just-in-time bug prediction models. Specifically, this study takes a three-step approach, 1) replicating previously widely used just-in-time bug prediction models, 2) applying Local Interpretability Model-agnostic Explanation Technique (LIME) and iBreakDown on the prediction results, and 3) manually evaluating the explanations for buggy instances (i.e. positive predictions) against the root cause of the bugs. The results of our experiment show that LIME and iBreakDown fail to explain defect prediction explanations for just-in-time bug prediction models, unlike file-level. This paper urges for new approaches for explaining the results of just-in-time bug prediction models.","PeriodicalId":136929,"journal":{"name":"2021 IEEE/ACM 43rd International Conference on Software Engineering: Companion Proceedings (ICSE-Companion)","volume":"44 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":"{\"title\":\"Explainable Just-In-Time Bug Prediction: Are We There Yet?\",\"authors\":\"Reem Aleithan\",\"doi\":\"10.1109/ICSE-Companion52605.2021.00056\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Explaining the prediction results of software bug prediction models is a challenging task, which can provide useful information for developers to understand and fix the predicted bugs. Recently, Jirayus et al.'s proposed to use two model-agnostic techniques (i.e., LIME and iBreakDown) to explain the prediction results of bug prediction models. Although their experiments on file-level bug prediction show promising results, the performance of these techniques on explaining the results of just-in-time (i.e., change-level) bug prediction is unknown. This paper conducts the first empirical study to explore the explainability of these model-agnostic techniques on just-in-time bug prediction models. Specifically, this study takes a three-step approach, 1) replicating previously widely used just-in-time bug prediction models, 2) applying Local Interpretability Model-agnostic Explanation Technique (LIME) and iBreakDown on the prediction results, and 3) manually evaluating the explanations for buggy instances (i.e. positive predictions) against the root cause of the bugs. The results of our experiment show that LIME and iBreakDown fail to explain defect prediction explanations for just-in-time bug prediction models, unlike file-level. 
This paper urges for new approaches for explaining the results of just-in-time bug prediction models.\",\"PeriodicalId\":136929,\"journal\":{\"name\":\"2021 IEEE/ACM 43rd International Conference on Software Engineering: Companion Proceedings (ICSE-Companion)\",\"volume\":\"44 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-05-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"4\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 IEEE/ACM 43rd International Conference on Software Engineering: Companion Proceedings (ICSE-Companion)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICSE-Companion52605.2021.00056\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE/ACM 43rd International Conference on Software Engineering: Companion Proceedings (ICSE-Companion)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICSE-Companion52605.2021.00056","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Explainable Just-In-Time Bug Prediction: Are We There Yet?
Explaining the prediction results of software bug prediction models is a challenging task; such explanations can provide useful information that helps developers understand and fix the predicted bugs. Recently, Jirayus et al. proposed using two model-agnostic techniques, LIME and iBreakDown, to explain the prediction results of bug prediction models. Although their experiments on file-level bug prediction show promising results, the performance of these techniques in explaining the results of just-in-time (i.e., change-level) bug prediction is unknown. This paper conducts the first empirical study exploring the explainability of these model-agnostic techniques for just-in-time bug prediction models. Specifically, the study takes a three-step approach: 1) replicating previously widely used just-in-time bug prediction models, 2) applying Local Interpretable Model-agnostic Explanations (LIME) and iBreakDown to the prediction results, and 3) manually evaluating the explanations for buggy instances (i.e., positive predictions) against the root causes of the bugs. The results of our experiment show that, unlike in the file-level setting, LIME and iBreakDown fail to produce useful explanations for just-in-time bug prediction models. This paper therefore calls for new approaches to explaining the results of just-in-time bug prediction models.
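To make the study's second step concrete, the sketch below shows how LIME can be applied to a change-level bug predictor using the Python `lime` package. This is a minimal illustration, not the paper's actual pipeline: the random-forest model, the synthetic data, and the JIT-style feature names (lines_added, lines_deleted, num_files, entropy) are assumptions for demonstration. iBreakDown is omitted here because it is distributed as an R package.

```python
# Minimal sketch (assumed setup, not the paper's pipeline): train a
# change-level bug predictor on synthetic JIT metrics, then ask LIME to
# explain one positively predicted (buggy) commit.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
feature_names = ["lines_added", "lines_deleted", "num_files", "entropy"]

# Synthetic change-level data: 500 commits with binary buggy/clean labels.
X = rng.random((500, 4))
y = (X[:, 0] + X[:, 3] > 1.0).astype(int)  # toy labeling rule, illustrative only

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X,
    feature_names=feature_names,
    class_names=["clean", "buggy"],
    mode="classification",
)

# Pick one commit the model flags as buggy (falls back to index 0 if none).
idx = int(np.argmax(model.predict(X)))
exp = explainer.explain_instance(X[idx], model.predict_proba, num_features=4)
for feature, weight in exp.as_list():
    print(f"{feature}: {weight:+.3f}")  # per-feature contribution toward "buggy"
```

Each printed weight is LIME's locally fitted contribution of that feature to the "buggy" prediction; the paper's manual evaluation step would then compare such feature-level explanations against the actual root cause of each bug.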