Improving Bug Assignment and Developer Allocation in Software Engineering through Interpretable Machine Learning Models

Comput. · Published: 2023-06-23 · DOI: 10.3390/computers12070128
Mina Samir, N. Sherief, W. Abdelmoez
Comput., vol. 24, no. 1, p. 128.

Abstract

Software engineering is a comprehensive process that requires developers and team members to collaborate across multiple tasks. In software testing, bug triaging is a tedious and time-consuming process. Assigning bugs to the appropriate developers can save time and maintain their motivation. However, without knowledge about a bug’s class, triaging is difficult. Motivated by this challenge, this paper focuses on the problem of assigning a suitable developer to a new bug by analyzing the history of developers’ profiles and analyzing the history of bugs for all developers using machine learning-based recommender systems. Explainable AI (XAI) is AI that humans can understand. It contrasts with “black box” AI, which even its designers cannot explain. By providing appropriate explanations for results, users can better comprehend the underlying insight behind the outcomes, boosting the recommender system’s effectiveness, transparency, and confidence. The trained model is utilized in the recommendation stage to calculate relevance scores for developers based on expertise and past bug handling performance, ultimately presenting the developers with the highest scores as recommendations for new bugs. This approach aims to strike a balance between computational efficiency and accurate predictions, enabling efficient bug assignment while considering developer expertise and historical performance. In this paper, we propose two explainable models for recommendation. The first is an explainable recommender model for personalized developers generated from bug history to know what the preferred type of bug is for each developer. The second model is an explainable recommender model based on bugs to identify the most suitable developer for each bug from bug history.
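The recommendation stage the abstract describes — scoring every developer against a new bug from their bug-handling history and surfacing the top scorers, with a human-readable explanation — can be sketched as a minimal content-based recommender. Everything below (term-frequency developer profiles, cosine scoring, shared terms as the "explanation") is an illustrative assumption for how such a system might work, not the paper's actual pipeline.

```python
# Minimal sketch of a content-based bug-to-developer recommender.
# Profiles, scoring, and the "explanation" are illustrative assumptions,
# not the paper's exact method.
from collections import Counter
import math

def tokenize(text):
    # Naive tokenizer: lowercase, whitespace-split, alphabetic tokens only.
    return [t for t in text.lower().split() if t.isalpha()]

def cosine(a, b):
    # Cosine similarity between two term-frequency Counters.
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def build_profiles(history):
    """history: (developer, bug_description) pairs from resolved bugs.
    Each developer's profile is the term-frequency vector of all bugs
    they have previously handled."""
    profiles = {}
    for dev, desc in history:
        profiles.setdefault(dev, Counter()).update(tokenize(desc))
    return profiles

def recommend(profiles, new_bug, k=2):
    """Score every developer against the new bug and return the top k.
    The overlapping terms serve as a simple, transparent explanation
    of *why* each developer was recommended."""
    bug_vec = Counter(tokenize(new_bug))
    ranked = []
    for dev, prof in profiles.items():
        score = cosine(prof, bug_vec)
        overlap = sorted(set(prof) & set(bug_vec))
        ranked.append((dev, score, overlap))
    ranked.sort(key=lambda x: x[1], reverse=True)
    return ranked[:k]

history = [
    ("alice", "null pointer crash in login form validation"),
    ("alice", "crash when login token expires"),
    ("bob", "ui layout broken on small screens"),
]
profiles = build_profiles(history)
top = recommend(profiles, "app crash after login timeout")
```

Here the shared-term list plays the role the paper assigns to XAI: the recommendation comes paired with the evidence behind it, so a triager can see at a glance which parts of the new bug matched a developer's history.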
Citations: 1
