Lessons learned from replicating a study on information-retrieval-based test case prioritization

Software Quality Journal · IF 1.7 · Q3 (Computer Science, Software Engineering) · CAS Tier 3 (Computer Science) · Publication date: 2023-10-16 · DOI: 10.1007/s11219-023-09650-4
Nasir Mehmood Minhas, Mohsin Irshad, Kai Petersen, Jürgen Börstler
{"title":"Lessons learned from replicating a study on information-retrieval-based test case prioritization","authors":"Nasir Mehmood Minhas, Mohsin Irshad, Kai Petersen, Jürgen Börstler","doi":"10.1007/s11219-023-09650-4","DOIUrl":null,"url":null,"abstract":"Abstract Replication studies help solidify and extend knowledge by evaluating previous studies’ findings. Software engineering literature showed that too few replications are conducted focusing on software artifacts without the involvement of humans. This study aims to replicate an artifact-based study on software testing to address the gap related to replications. In this investigation, we focus on (i) providing a step-by-step guide of the replication, reflecting on challenges when replicating artifact-based testing research and (ii) evaluating the replicated study concerning the validity and robustness of the findings. We replicate a test case prioritization technique proposed by Kwon et al. We replicated the original study using six software programs, four from the original study and two additional software programs. We automated the steps of the original study using a Jupyter notebook to support future replications. Various general factors facilitating replications are identified, such as (1) the importance of documentation; (2) the need for assistance from the original authors; (3) issues in the maintenance of open-source repositories (e.g., concerning needed software dependencies, versioning); and (4) availability of scripts. We also noted observations specific to the study and its context, such as insights from using different mutation tools and strategies for mutant generation. We conclude that the study by Kwon et al. is partially replicable for small software programs and could be automated to facilitate software practitioners, given the availability of required information. However, it is hard to implement the technique for large software programs with the current guidelines. Based on lessons learned, we suggest that the authors of original studies need to publish their data and experimental setup to support the external replications.","PeriodicalId":21827,"journal":{"name":"Software Quality Journal","volume":null,"pages":null},"PeriodicalIF":1.7000,"publicationDate":"2023-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Software Quality Journal","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1007/s11219-023-09650-4","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, SOFTWARE ENGINEERING","Score":null,"Total":0}
Citations: 0

Abstract

Replication studies help solidify and extend knowledge by evaluating the findings of previous studies. The software engineering literature shows that too few replications are conducted, in particular replications that focus on software artifacts without the involvement of humans. This study replicates an artifact-based study on software testing to help address this gap. In this investigation, we focus on (i) providing a step-by-step guide to the replication and reflecting on the challenges of replicating artifact-based testing research, and (ii) evaluating the replicated study with respect to the validity and robustness of its findings. We replicate a test case prioritization technique proposed by Kwon et al. using six software programs, four from the original study and two additional ones. We automated the steps of the original study using a Jupyter notebook to support future replications. We identify several general factors that facilitate replications, such as (1) the importance of documentation; (2) the need for assistance from the original authors; (3) issues in the maintenance of open-source repositories (e.g., concerning needed software dependencies and versioning); and (4) the availability of scripts. We also note observations specific to the study and its context, such as insights from using different mutation tools and strategies for mutant generation. We conclude that the study by Kwon et al. is partially replicable for small software programs and can be automated to support software practitioners, provided the required information is available. However, with the current guidelines, it is hard to apply the technique to large software programs. Based on the lessons learned, we suggest that authors of original studies publish their data and experimental setup to support external replications.
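The abstract does not describe the replicated technique itself, so as background the sketch below illustrates only the core idea behind information-retrieval-based test case prioritization: treat each test case's source code as a document and rank the tests by their textual similarity to the code affected by a change. This is a minimal sketch under assumed names (prioritize, test_sources, changed_code) using a plain TF-IDF/cosine scheme; it is not Kwon et al.'s exact method. Python is used here since the replication itself was automated in a Jupyter notebook.

```python
# Minimal sketch of IR-based test case prioritization (illustrative only,
# not the replicated technique from Kwon et al.). Each test's source code
# is treated as a document; tests are ranked by cosine similarity between
# their TF-IDF vectors and the vector of the changed program code.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def prioritize(test_sources: dict[str, str], changed_code: str) -> list[str]:
    """Return test case names ordered by textual similarity to the change."""
    names = list(test_sources)
    # Tokenize on identifier-like terms so code names drive the ranking.
    vectorizer = TfidfVectorizer(token_pattern=r"[A-Za-z_]\w+")
    # Index the test documents together with the changed code as the query.
    matrix = vectorizer.fit_transform(
        [test_sources[n] for n in names] + [changed_code]
    )
    scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
    return [n for _, n in sorted(zip(scores, names), reverse=True)]

# Hypothetical usage: a change touching parse_header should rank the
# parser test ahead of the unrelated CLI test.
tests = {
    "test_parser": "def test_parse_header(): assert parse_header(raw) == expected",
    "test_cli": "def test_cli_exit_code(): assert main(['--help']) == 0",
}
print(prioritize(tests, "def parse_header(raw): ..."))
```

A real implementation would also need the artifact-extraction steps that build the test and change documents from a project's repository, which is the kind of setup the replication automates in its notebook.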

Source Journal

Software Quality Journal (Engineering & Technology, Computer Science: Software Engineering)

CiteScore: 4.90
Self-citation rate: 5.30%
Annual article count: 26
Review time: >12 weeks
About the Journal

The aims of the Software Quality Journal are: (1) to promote awareness of the crucial role of quality management in the effective construction of the software systems developed, used, and/or maintained by organizations in pursuit of their business objectives; (2) to provide a forum for the exchange of experiences and information on software quality management and the methods, tools, and products used to measure and achieve it; and (3) to provide a vehicle for the publication of academic papers related to all aspects of software quality.

The Journal addresses all aspects of software quality from both a practical and an academic viewpoint. It invites contributions from practitioners and academics, as well as national and international policy and standards-making bodies, and sets out to be the definitive international reference source for such information.

The Journal accepts research, technique, case study, survey, and tutorial submissions that address quality-related issues, including, but not limited to: internal and external quality standards; management of quality within organizations; technical aspects of quality; quality aspects for product vendors; software measurement and metrics; software testing and other quality assurance techniques; total quality management and cultural aspects; and other technical issues with regard to software quality, including data management, formal methods, safety-critical applications, and CASE.
Latest Articles in This Journal

Towards effective gamification of existing systems: method and experience report
KeyTitle: towards better bug report title generation by keywords planning
Getting into the game: gamifying software development with the GSA framework
Navigating social debt and its link with technical debt in large-scale agile software development projects
Programming languages ranking based on energy measurements