Crowdsourced Testing Task Assignment based on Knowledge Graphs

Peng-Xi Yang, Chao Chang, Yong Tang
{"title":"基于知识图谱的众包测试任务分配","authors":"Peng-Xi Yang, Chao Chang, Yong Tang","doi":"10.1109/QRS57517.2022.00072","DOIUrl":null,"url":null,"abstract":"The non-professional and uncertain testers in crowdsourced testing could lead to the problems of uneven test report quality, substandard test requirement coverage, a large number of repeated bug reports, and low efficiency of report reviewing. This paper designs a crowdsourced testing task assignment approach based on knowledge graph, trying to make full use of the individual advantages and crowd intelligence of crowdsourced workers in crowdsourced testing through personalized task assignment, with the goal to improve the quality of test reports and test completion efficiency. The approach includes three modules: 1) knowledge graph data acquisition: the concept of collaborative crowdsourced test is introduced, and a complete crowdsourced report submission platform is built to obtain the required data for the knowledge graph. 2) Knowledge graph feature learning: building an internal knowledge graph of the crowdsourced testing field based on the data in the platform and combining the historical task records of crowdsourced workers as input, using the machine learning model to get the crowdsourced workers’ preference for specific tasks, and integrates the three-level page coverage and bug-like status. 3) Knowledge graph task assignment: assign test tasks and audit tasks to crowdsourced workers in order to improve the coverage of test requirements and overall test efficiency. We compare the quantity and quality of bug reports in a crowdsourced test task between the task assignment system based on a knowledge graph and the system based on collaborative filtering, which proves the effectiveness of our task assignment technique.","PeriodicalId":143812,"journal":{"name":"2022 IEEE 22nd International Conference on Software Quality, Reliability and Security (QRS)","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Crowdsourced Testing Task Assignment based on Knowledge Graphs\",\"authors\":\"Peng-Xi Yang, Chao Chang, Yong Tang\",\"doi\":\"10.1109/QRS57517.2022.00072\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The non-professional and uncertain testers in crowdsourced testing could lead to the problems of uneven test report quality, substandard test requirement coverage, a large number of repeated bug reports, and low efficiency of report reviewing. This paper designs a crowdsourced testing task assignment approach based on knowledge graph, trying to make full use of the individual advantages and crowd intelligence of crowdsourced workers in crowdsourced testing through personalized task assignment, with the goal to improve the quality of test reports and test completion efficiency. The approach includes three modules: 1) knowledge graph data acquisition: the concept of collaborative crowdsourced test is introduced, and a complete crowdsourced report submission platform is built to obtain the required data for the knowledge graph. 
2) Knowledge graph feature learning: building an internal knowledge graph of the crowdsourced testing field based on the data in the platform and combining the historical task records of crowdsourced workers as input, using the machine learning model to get the crowdsourced workers’ preference for specific tasks, and integrates the three-level page coverage and bug-like status. 3) Knowledge graph task assignment: assign test tasks and audit tasks to crowdsourced workers in order to improve the coverage of test requirements and overall test efficiency. We compare the quantity and quality of bug reports in a crowdsourced test task between the task assignment system based on a knowledge graph and the system based on collaborative filtering, which proves the effectiveness of our task assignment technique.\",\"PeriodicalId\":143812,\"journal\":{\"name\":\"2022 IEEE 22nd International Conference on Software Quality, Reliability and Security (QRS)\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 IEEE 22nd International Conference on Software Quality, Reliability and Security (QRS)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/QRS57517.2022.00072\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE 22nd International Conference on Software Quality, Reliability and Security (QRS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/QRS57517.2022.00072","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

The non-professional and uncertain nature of testers in crowdsourced testing can lead to uneven test report quality, insufficient coverage of test requirements, large numbers of duplicate bug reports, and inefficient report review. This paper designs a crowdsourced testing task assignment approach based on knowledge graphs, aiming to make full use of the individual strengths and collective intelligence of crowdsourced workers through personalized task assignment, with the goal of improving test report quality and test completion efficiency. The approach consists of three modules: 1) Knowledge graph data acquisition: the concept of collaborative crowdsourced testing is introduced, and a complete crowdsourced report submission platform is built to collect the data required for the knowledge graph. 2) Knowledge graph feature learning: an internal knowledge graph of the crowdsourced testing domain is built from the platform data and combined with crowdsourced workers' historical task records as input; a machine learning model then derives each worker's preference for specific tasks, integrating three-level page coverage and bug-like status. 3) Knowledge graph task assignment: test tasks and audit tasks are assigned to crowdsourced workers to improve the coverage of test requirements and overall testing efficiency. We compare the quantity and quality of bug reports produced in a crowdsourced test task between the knowledge-graph-based task assignment system and a collaborative-filtering-based system, which demonstrates the effectiveness of our task assignment technique.
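
The abstract does not spell out how worker preferences are scored or how tasks are dispatched. As a rough illustration of the general idea only (a knowledge graph linking workers, skills, and tasks, a preference score over it, and greedy assignment), the following Python sketch uses entirely hypothetical entities, relations, and a toy neighborhood-overlap score in place of the paper's learned machine learning model and platform data.

```python
# Illustrative sketch only: hypothetical workers, tasks, relations, and a toy
# scoring rule standing in for the paper's learned preference model.
from collections import defaultdict
from itertools import product

# Knowledge graph as (head, relation, tail) triples: worker skills,
# historical task records, and task attributes.
triples = [
    ("worker_a", "has_skill", "android_ui"),
    ("worker_a", "completed", "task_login"),
    ("worker_b", "has_skill", "payment_flow"),
    ("worker_b", "has_skill", "android_ui"),
    ("task_login", "covers_page", "login_page"),
    ("task_login", "requires_skill", "android_ui"),
    ("task_checkout", "covers_page", "payment_page"),
    ("task_checkout", "requires_skill", "payment_flow"),
]

def neighbors(entity):
    """Entities directly linked to `entity` in the graph (either direction)."""
    linked = set()
    for head, _, tail in triples:
        if head == entity:
            linked.add(tail)
        elif tail == entity:
            linked.add(head)
    return linked

def preference(worker, task):
    """Toy preference score: Jaccard overlap of one-hop neighborhoods,
    a stand-in for the learned worker-task preference in the paper."""
    w, t = neighbors(worker), neighbors(task)
    return len(w & t) / (len(w | t) or 1)

def assign(workers, tasks, per_task=1):
    """Greedy assignment: each task receives its highest-scoring workers."""
    assignment = defaultdict(list)
    pairs = sorted(product(tasks, workers),
                   key=lambda p: preference(p[1], p[0]), reverse=True)
    for task, worker in pairs:
        if len(assignment[task]) < per_task:
            assignment[task].append(worker)
    return dict(assignment)

print(assign(["worker_a", "worker_b"], ["task_login", "task_checkout"]))
# -> {'task_checkout': ['worker_b'], 'task_login': ['worker_a']}
```

In this toy setup, worker_b is matched to task_checkout through the shared payment_flow skill, while worker_a's history with task_login raises its score for that task; the actual approach replaces the overlap score with model-derived preferences and additionally weighs three-level page coverage and bug-like status.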