
Latest publications from the 2017 IEEE/ACM 12th International Workshop on Automation of Software Testing (AST)

d(mu)Reg: A Path-Aware Mutation Analysis Guided Approach to Regression Testing
Chang-ai Sun, Cuiyang Fan, Zhen Wang, Huai Liu
Regression testing re-runs some previously executed test cases, with the purpose of checking whether previously fixed faults have re-emerged and ensuring that the changes do not negatively affect the existing behaviors of the software under development. Today's software is developed and evolved rapidly, and thus it is critical to perform regression testing quickly and effectively. In this paper, we propose a novel technique for regression testing, based on a family of mutant selection strategies. The preliminary results show that the proposed technique can significantly improve the efficiency of different regression testing activities, including test case reduction and prioritization. Our work also makes it possible to develop a unified framework that effectively implements various activities in regression testing.
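To make the idea of mutation-analysis-guided prioritization concrete, here is a minimal Python sketch of a generic "additional greedy" strategy over a mutant kill matrix. It illustrates the general technique only, not the d(mu)Reg algorithm from the paper; the kill matrix and test names are hypothetical.

```python
# Hedged illustration: greedy test-case prioritization driven by a mutation
# kill matrix. This is NOT the d(mu)Reg algorithm, only a generic
# mutation-analysis-guided prioritization sketch. The kill matrix (which test
# kills which mutant) is assumed to come from a prior mutation-analysis run.

def prioritize_by_mutants(kill_matrix):
    """kill_matrix: dict mapping test name -> set of mutant ids it kills.
    Returns the tests ordered so that each next test kills the most
    not-yet-killed mutants ('additional' greedy strategy)."""
    remaining = dict(kill_matrix)          # tests still unscheduled
    killed = set()                         # mutants covered so far
    order = []
    while remaining:
        # pick the test that kills the most mutants not yet killed
        best = max(remaining, key=lambda t: len(remaining[t] - killed))
        gain = remaining[best] - killed
        if not gain:                       # no test adds new mutants; append the rest
            order.extend(sorted(remaining))
            break
        order.append(best)
        killed |= gain
        del remaining[best]
    return order

if __name__ == "__main__":
    kills = {
        "t1": {"m1", "m2"},
        "t2": {"m2", "m3", "m4"},
        "t3": {"m5"},
    }
    print(prioritize_by_mutants(kills))    # ['t2', 't1', 't3']
```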
{"title":"d(mu)Reg: A Path-Aware Mutation Analysis Guided Approach to Regression Testing","authors":"Chang-ai Sun, Cuiyang Fan, Zhen Wang, Huai Liu","doi":"10.1109/AST.2017.8","DOIUrl":"https://doi.org/10.1109/AST.2017.8","url":null,"abstract":"Regression testing re-runs some previously executed test cases, with the purpose of checking whether previously fixed faults have re-emerged and ensuring that the changes do not negatively affect the existing behaviors of the software under development. Today's software is rapidly developed and evolved, and thus it is critical to implement regression testing quickly and effectively. In this paper, we propose a novel technique for regression testing, based on a family of mutant selection strategies. The preliminary results show that the proposed technique can significantly improve the efficiency of different regression testing activities, including test case reduction and prioritization. Our work also makes it possible to develop a unified framework that effectively implements various activities in regression testing.","PeriodicalId":141557,"journal":{"name":"2017 IEEE/ACM 12th International Workshop on Automation of Software Testing (AST)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129573626","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 4
Toward Adaptive, Self-Aware Test Automation
Benedikt Eberhardinger, Axel Habermaier, W. Reif
Software testing plays a major role in engineering future systems, which are becoming ever more ubiquitous and ever more critical to everyday life. To meet this high demand, test automation is needed as a keystone. However, test automation as it is used today relies on scripting and capture-and-replay and cannot keep up with autonomous and intelligent systems. We therefore call for adaptive and autonomous test automation and propose a model-based approach that enables both self-awareness and awareness of the system under test, which is used to automate the test suites.
{"title":"Toward Adaptive, Self-Aware Test Automation","authors":"Benedikt Eberhardinger, Axel Habermaier, W. Reif","doi":"10.1109/AST.2017.1","DOIUrl":"https://doi.org/10.1109/AST.2017.1","url":null,"abstract":"Software testing plays a major role for engineering future systems that become more and more ubiquitous and also more critical for every days life. In order to fulfill the high demand, test automation is needed as a keystone. However, test automation, as it is used today, is counting on scripting and capture-and-replay and is not able to keep up with autonomous and intelligent systems. Therefore, we ask for an adaptive and autonomous test automation and propose a model-based approach that enables self-awareness as well as awareness of the system under test which is used for automation of the test suites.","PeriodicalId":141557,"journal":{"name":"2017 IEEE/ACM 12th International Workshop on Automation of Software Testing (AST)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128142895","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 9
Transferring Software Testing Tools to Practice
Tao Xie
Achieving successful technology adoption in practice has often been an important goal for both academic and industrial researchers. However, it is generally challenging to transfer research results into industrial products or into tools that are widely adopted. What are the key factors that lead to practical impact for a research project? This talk presents experiences and lessons learned in successfully transferring tools from two testing projects carried out as collaborative efforts between academia and industry. In the Pex project (research.microsoft.com/pex) [3], nearly a decade's collaborative efforts between Microsoft Research and academia have led to high-impact tools that are now shipped by Microsoft and adopted by the community. These tools include Fakes [2], a test isolation framework shipped with Visual Studio 2012/2013, IntelliTest, an automatic test generation tool shipped with Visual Studio 2015, and Code Hunt (www.codehunt.com) [1] (evolved from Pex4Fun [4]), a popular serious gaming platform for coding contests and practicing programming skills, which attracted 350,000+ players from May 2014 to August 2016 and has been adopted in the large-scale Microsoft Imagine Cup and Beauty of Programming contests. In the WeChat testing project, recent collaborative efforts [5], [6] between Tencent and academia have developed effective techniques for testing Android apps by improving Google's Monkey, a widely used Android testing tool in industry. The developed techniques have been applied to test WeChat, one of the world's most popular messenger apps with over 800 million monthly active users.
{"title":"Transferring Software Testing Tools to Practice","authors":"Tao Xie","doi":"10.1109/AST.2017.10","DOIUrl":"https://doi.org/10.1109/AST.2017.10","url":null,"abstract":"Achieving successful technology adoption in practice has often been an important goal for both academic and industrial researchers. However, it is generally challenging to transfer research results into industrial products or into tools that are widely adopted. What are the key factors that lead to practical impact for a research project? This talk presents experiences and lessons learned in successfully transferring tools from two testing projects as collaborative efforts between the academia and industry. In the Pex project (research.microsoft.com/pex) [3], nearly a decade's collaborative efforts between Microsoft Research and academia have led to high-impact tools that are now shipped by Microsoft and adopted by the community. These tools include Fakes [2], a test isolation framework shipped with Visual Studio 2012/2013, IntelliTest, an automatic test generation tool shipped with Visual Studio 2015, and Code Hunt (www.codehunt.com) [1] (evolved from Pex4Fun [4]), a popular serious gaming platform for coding contests and practicing programming skills, which has attracted 350,000+ players from May 2014 to August 2016, and has been adopted in large-scale Microsoft Imagine Cup and Beauty of Programming contests. In the WeChat testing project, recent collaborative efforts [5], [6] between Tencent and academia have developed effective techniques for testing Android apps, by improving Google's Monkey, a popularly used Android testing tool in industry. The developed techniques have been applied to test WeChat, one of world's most popular messenger apps with over 800 million monthly active users.","PeriodicalId":141557,"journal":{"name":"2017 IEEE/ACM 12th International Workshop on Automation of Software Testing (AST)","volume":"383 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133363082","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 2
Analyzing Automatic Test Generation Tools for Refactoring Validation
I. C. S. Silva, Everton L. G. Alves, W. Andrade
Refactoring edits are very common during agile development. Due to their inherent complexity, refactorings are known to be error prone. Hence, refactoring edits require validation to check that no behavior change was introduced. A valid way of validating refactorings is the use of automatically generated regression test suites. However, although these tools are popular, it is not certain whether the tools for generating tests (e.g., Randoop and EvoSuite) are in fact suitable in this context. This paper presents an exploratory study that investigated the effectiveness of suites generated by automatic tools with regard to their capacity to detect refactoring faults. Our results show that both the Randoop and EvoSuite suites missed more than 50% of all injected faults. Moreover, their suites include a great number of tests that could not be run as-is after the edits (obsolete test cases).
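For readers unfamiliar with how a generated suite can flag a behavior change, the following minimal Python sketch shows the regression-oracle idea such suites rely on: inputs and the outputs recorded from the original version are replayed against the refactored version. The functions `original_sort` and `refactored_sort` are hypothetical stand-ins; the study itself evaluated Java suites produced by Randoop and EvoSuite.

```python
# Hedged sketch of the regression-oracle idea behind generated suites:
# capture observable behaviour of the original code, then replay the same
# inputs against the refactored code and compare. 'original_sort' and
# 'refactored_sort' are hypothetical stand-ins, not from the paper.

import unittest

def original_sort(xs):
    return sorted(xs)

def refactored_sort(xs):          # the edit under validation
    return sorted(xs, key=lambda x: x)

# Inputs a generator like Randoop/EvoSuite might have produced, paired with
# expected values recorded from the original version.
RECORDED = [((3, 1, 2), original_sort((3, 1, 2))),
            ((), original_sort(())),
            ((5, 5, -1), original_sort((5, 5, -1)))]

class RegressionOracleTest(unittest.TestCase):
    def test_refactoring_preserves_behaviour(self):
        for inputs, expected in RECORDED:
            self.assertEqual(refactored_sort(list(inputs)), expected)

if __name__ == "__main__":
    unittest.main()
```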
{"title":"Analyzing Automatic Test Generation Tools for Refactoring Validation","authors":"I. C. S. Silva, Everton L. G. Alves, W. Andrade","doi":"10.1109/AST.2017.9","DOIUrl":"https://doi.org/10.1109/AST.2017.9","url":null,"abstract":"Refactoring edits are very common during agile development. Due to their inherent complexity, refactorings are know to be error prone. In this sense, refactoring edits require validation to check whether no behavior change was introduced. A valid way for validating refactorings is the use of automatically generated regression test suites. However, although popular, it is not certain whether the tools for generating tests (e.g., Randoop and EvoSuite) are in fact suitable in this context. This paper presents an exploratory study that investigated the efficiency of suites generated by automatic tools regarding their capacity of detecting refactoring faults. Our results show that both Randoop and EvoSuite suites missed more than 50% of all injected faults. Moreover, their suites include a great number of tests that could not be run integrally after the edits (obsolete test cases).","PeriodicalId":141557,"journal":{"name":"2017 IEEE/ACM 12th International Workshop on Automation of Software Testing (AST)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125264306","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 10
High-Coverage Testing of Navigation Models in Android Applications
Fernando Paulovsky, Esteban Pavese, D. Garbervetsky
In this work, we present a tool that systematically discovers and tests the user-observable states of an Android application. We define an appropriate notion of test coverage, and we show the tool's potential by applying it to several publicly available applications.
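As a rough illustration of what a state-based coverage notion over a navigation model can look like, here is a small Python sketch that computes the fraction of reachable user-observable states visited by the tests. The graph, state names, and coverage definition are assumptions for illustration, not the authors' tool or metric.

```python
# Hedged sketch (not the authors' tool): one simple way to define state
# coverage over an explored navigation model. States and transitions are
# hypothetical; a real tool would derive them by driving the app's GUI.

from collections import deque

def reachable_states(nav_graph, start):
    """nav_graph: dict state -> list of (action, next_state). BFS over the model."""
    seen, queue = {start}, deque([start])
    while queue:
        state = queue.popleft()
        for _action, nxt in nav_graph.get(state, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

def state_coverage(nav_graph, start, visited_in_tests):
    """Fraction of user-observable states in the model that the tests visited."""
    model_states = reachable_states(nav_graph, start)
    return len(model_states & set(visited_in_tests)) / len(model_states)

if __name__ == "__main__":
    graph = {"Main": [("tapList", "List")], "List": [("tapItem", "Detail")]}
    print(state_coverage(graph, "Main", {"Main", "List"}))  # 2 of 3 states -> ~0.67
```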
{"title":"High-Coverage Testing of Navigation Models in Android Applications","authors":"Fernando Paulovsky, Esteban Pavese, D. Garbervetsky","doi":"10.1109/AST.2017.6","DOIUrl":"https://doi.org/10.1109/AST.2017.6","url":null,"abstract":"In this work, we present a tool that systematically discovers and tests the user-observable states of an Android application. We define an appropriate notion of test coverage, and we show the tool's potential by applying it to several publicly available applications.","PeriodicalId":141557,"journal":{"name":"2017 IEEE/ACM 12th International Workshop on Automation of Software Testing (AST)","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129960814","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 3
Tree Preprocessing and Test Outcome Caching for Efficient Hierarchical Delta Debugging
Renáta Hodován, Ákos Kiss, T. Gyimóthy
Test case reduction has been automated since the introduction of the minimizing Delta Debugging algorithm, but improving the efficiency of reduction is still the focus of research. This paper focuses on Hierarchical Delta Debugging, already an improvement over the original technique, and describes how its input tree and caching approach can be changed for higher efficiency. The proposed optimizations were evaluated on artificial and real test cases of 6 different input formats, and achieved an average 45% drop in the number of testing steps needed to reach the minimized results - with the best improvement being as high as 82%, giving a more than 5-fold speedup.
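The caching idea generalizes beyond Hierarchical Delta Debugging: memoize the test oracle's verdict per candidate configuration so that configurations revisited during reduction are never re-executed. The Python sketch below applies this to a simple greedy one-pass reducer; it is an illustration of outcome caching under assumed names, not the paper's algorithm or its tree preprocessing.

```python
# Hedged sketch: test-outcome caching for a reducer, in the spirit of the
# paper but not its exact HDD algorithm. Key idea: memoize the oracle's
# verdict per candidate so that configurations which reappear during
# reduction are never re-executed.

def cached_oracle(test_fn):
    cache = {}
    def run(candidate):
        key = tuple(candidate)            # candidates are sequences of kept units
        if key not in cache:
            cache[key] = test_fn(candidate)   # the expensive part: run the SUT
        return cache[key]
    run.cache = cache
    return run

def reduce_once(units, still_fails):
    """One greedy pass: drop each unit if the failure is preserved without it."""
    oracle = cached_oracle(still_fails)
    kept = list(units)
    i = 0
    while i < len(kept):
        candidate = kept[:i] + kept[i + 1:]
        if oracle(candidate):             # failure still reproduced without unit i
            kept = candidate
        else:
            i += 1
    return kept, len(oracle.cache)        # reduced input and number of real test runs

if __name__ == "__main__":
    # toy oracle: the "bug" needs tokens 'a' and 'c' to be present
    fails = lambda cand: "a" in cand and "c" in cand
    print(reduce_once(list("abcd"), fails))   # (['a', 'c'], 4)
```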
{"title":"Tree Preprocessing and Test Outcome Caching for Efficient Hierarchical Delta Debugging","authors":"Renáta Hodován, Ákos Kiss, T. Gyimóthy","doi":"10.1109/AST.2017.4","DOIUrl":"https://doi.org/10.1109/AST.2017.4","url":null,"abstract":"Test case reduction has been automated since the introduction of the minimizing Delta Debugging algorithm, but improving the efficiency of reduction is still the focus of research. This paper focuses on Hierarchical Delta Debugging, already an improvement over the original technique, and describes how its input tree and caching approach can be changed for higher efficiency. The proposed optimizations were evaluated on artificial and real test cases of 6 different input formats, and achieved an average 45% drop in the number of testing steps needed to reach the minimized results - with the best improvement being as high as 82%, giving a more than 5-fold speedup.","PeriodicalId":141557,"journal":{"name":"2017 IEEE/ACM 12th International Workshop on Automation of Software Testing (AST)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117133288","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 16
Gamification of Software Testing
G. Fraser
Writing good software tests is difficult, not every software developer's favorite occupation, and not a prominent aspect in programming education. However, human involvement in testing is unavoidable: What makes a test good is often down to intuition; what makes a test useful depends on an understanding of the program context; what makes a test find bugs depends on understanding the intended program behaviour. Because the consequences of insufficient testing can be dire, this paper explores a new angle to address the testing problem: Gamification is the approach of converting potentially tedious or boring tasks to components of entertaining gameplay, where the competitive nature of humans motivates them to compete and excel. By applying gamification concepts to software testing, there is potential to fundamentally change software testing in several ways: First, gamification can help to overcome deficiencies in education, where testing is a highly neglected topic. Second, gamification engages practitioners in testing tasks they would otherwise neglect, and gets them to use advanced testing tools and techniques they would otherwise not consider. Finally, gamification makes it possible to crowdsource complex testing tasks through games with a purpose. Collectively, these applications of gamification have the potential to substantially improve software testing practice, and thus software quality.
{"title":"Gamification of Software Testing","authors":"G. Fraser","doi":"10.1109/AST.2017.20","DOIUrl":"https://doi.org/10.1109/AST.2017.20","url":null,"abstract":"Writing good software tests is difficult, not everysoftware developer’s favorite occupation, and not a prominentaspect in programming education. However, human involvementin testing is unavoidable: What makes a test good is oftendown to intuition; what makes a test useful depends on anunderstanding of the program context; what makes a test findbugs depends on understanding the intended program behaviour.Because the consequences of insufficient testing can be dire, thispaper explores a new angle to address the testing problem:Gamification is the approach of converting potentially tediousor boring tasks to components of entertaining gameplay, wherethe competitive nature of humans motivates them to competeand excel. By applying gamification concepts to software testing,there is potential to fundamentally change software testing inseveral ways: First, gamification can help to overcome deficienciesin education, where testing is a highly neglected topic. Second,gamification engages practitioners in testing tasks they wouldotherwise neglect, and gets them to use advanced testing toolsand techniques they would otherwise not consider. Finally, gamificationmakes it possible to crowdsource complex testing tasksthrough games with a purpose. Collectively, these applications ofgamification have the potential to substantially improve softwaretesting practice, and thus software quality.","PeriodicalId":141557,"journal":{"name":"2017 IEEE/ACM 12th International Workshop on Automation of Software Testing (AST)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131131906","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 33
Generating Unit Tests with Structured System Interactions
Nikolas Havrikov, Alessio Gambi, A. Zeller, Andrea Arcuri, Juan P. Galeotti
There is a large body of work in the literature on automatic unit test generation, and many successful results have been reported so far. However, current approaches target library classes, not full applications. A major obstacle to testing full applications is that they interact with the environment. For example, they access files on the hard drive or establish connections to remote servers. Thoroughly testing such applications requires tests that completely control the interactions between the application and its environment. Recent techniques based on mocking enable the generation of tests that include environment interactions; however, generating the right type of interactions is still an open problem. In this paper, we describe a novel approach that addresses this problem by enhancing search-based testing with complex test data generation. Experiments on an artificial system show that the proposed approach can generate effective unit tests. Compared with current techniques based on mocking, we generate more robust unit tests that achieve higher coverage and are, arguably, easier to read and understand.
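As a minimal illustration of tests that fully control an environment interaction, the Python sketch below uses `unittest.mock` to stub out the file system for a hypothetical `load_config` function. It shows only the mocking side of the picture; the paper's contribution, automatically generating structured interaction data, is not reproduced here.

```python
# Hedged sketch of the general mocking idea the abstract builds on: a unit
# test that fully controls the file-system interaction of the code under
# test. 'load_config' is a hypothetical example, not from the paper.

import json
import unittest
from unittest.mock import mock_open, patch

def load_config(path):
    """Code under test: reads a JSON config file from the environment."""
    with open(path) as fh:
        return json.load(fh)

class LoadConfigTest(unittest.TestCase):
    def test_reads_timeout_from_file(self):
        fake_file = mock_open(read_data='{"timeout": 30}')
        with patch("builtins.open", fake_file):      # no real file is touched
            cfg = load_config("settings.json")
        self.assertEqual(cfg["timeout"], 30)
        fake_file.assert_called_once_with("settings.json")

if __name__ == "__main__":
    unittest.main()
```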
{"title":"Generating Unit Tests with Structured System Interactions","authors":"Nikolas Havrikov, Alessio Gambi, A. Zeller, Andrea Arcuri, Juan P. Galeotti","doi":"10.1109/AST.2017.2","DOIUrl":"https://doi.org/10.1109/AST.2017.2","url":null,"abstract":"There is a large body of work in the literature about automatic unit tests generation, and many successful results have been reported so far. However, current approaches target library classes, but not full applications. A major obstacle for testing full applications is that they interact with the environment. For example, they access files on the hard drive or establish connections to remote servers. Thoroughly testing such applications requires tests that completely control the interactions between the application and its environment. Recent techniques based on mocking enable the generation of tests which include environment interactions, however, generating the right type of interactions is still an open problem. In this paper, we describe a novel approach which addresses this problem by enhancing search-based testing with complex test data generation. Experiments on an artificial system show that the proposed approach can generate effective unit tests. Compared with current techniques based on mocking, we generate more robust unit tests which achieve higher coverage and are, arguably, easier to read and understand.","PeriodicalId":141557,"journal":{"name":"2017 IEEE/ACM 12th International Workshop on Automation of Software Testing (AST)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129000293","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 4
Efficient Product-Line Testing Using Cluster-Based Product Prioritization
Mustafa Al-Hajjaji, J. Krüger, Sandro Schulze, Thomas Leich, G. Saake
A software product line comprises a set of products that share a common set of features. These features can be reused to customize a product to satisfy the specific needs of certain customers or markets. As the number of possible products grows exponentially with the number of features, testing all products is infeasible. Existing testing approaches reduce their effort by restricting the number of products (sampling) and improve their effectiveness by considering the order of tests (prioritization). In this paper, we propose a cluster-based prioritization technique that groups sampled products by the similarity of their feature selections. We evaluate our approach using feature models of different sizes and show that cluster-based prioritization can enhance the effectiveness of product-line testing.
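A rough Python sketch of the general idea, under assumed encodings: products are represented as feature-selection bit vectors, clustered by Hamming similarity, and then ordered round-robin across clusters so that dissimilar products are tested early. This is an illustration only, not the paper's clustering or prioritization algorithm.

```python
# Hedged sketch (not the paper's exact algorithm): cluster sampled products
# by the similarity of their feature selections, then prioritize by taking
# one product per cluster in round-robin order, so dissimilar products come first.

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def cluster_products(products, k):
    """Very simple farthest-point clustering on feature-selection bit vectors."""
    centers = [products[0]]
    while len(centers) < min(k, len(products)):
        centers.append(max(products, key=lambda p: min(hamming(p, c) for c in centers)))
    clusters = [[] for _ in centers]
    for p in products:
        clusters[min(range(len(centers)), key=lambda i: hamming(p, centers[i]))].append(p)
    return clusters

def prioritize(products, k=2):
    clusters = cluster_products(products, k)
    order = []
    while any(clusters):
        for cluster in clusters:          # round-robin: one product per cluster
            if cluster:
                order.append(cluster.pop(0))
    return order

if __name__ == "__main__":
    # each tuple is one product: 1 = feature selected, 0 = deselected
    products = [(1, 1, 0, 0), (1, 1, 1, 0), (0, 0, 1, 1), (0, 1, 1, 1)]
    print(prioritize(products, k=2))      # dissimilar products appear early
```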
{"title":"Efficient Product-Line Testing Using Cluster-Based Product Prioritization","authors":"Mustafa Al-Hajjaji, J. Krüger, Sandro Schulze, Thomas Leich, G. Saake","doi":"10.1109/AST.2017.7","DOIUrl":"https://doi.org/10.1109/AST.2017.7","url":null,"abstract":"A software product-line comprises a set of products that share a common set of features. These features can be reused to customize a product to satisfy specific needs of certain customers or markets. As the number of possible products increases exponentially for new features, testing all products is infeasible. Existing testing approaches reduce their effort by restricting the number of products (sampling) and improve their effectiveness by considering the order of tests (prioritization). In this paper, we propose a cluster-based prioritization technique to sample similar products with respect to the feature selection. We evaluate our approach using feature models of different sizes and show that cluster-based prioritization can enhance the effectiveness of product-line testing.","PeriodicalId":141557,"journal":{"name":"2017 IEEE/ACM 12th International Workshop on Automation of Software Testing (AST)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129958013","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 8
Supporting Agile Teams with a Test Analytics Platform: A Case Study
O. Liechti, J. Pasquier-Rocha, R. Reis
Continuous improvement, feedback mechanisms and automated testing are cornerstones of agile methods. We introduce the concept of test analytics, which brings these three practices together. We illustrate the concept with an industrial case study and describe the experiments run by a team who had set a goal for itself to get better at testing. Beyond technical aspects, we explain how these experiments have changed the mindset and the behaviour of the team members. We then present an open source test analytics platform, later developed to share the positive learnings with the community. We describe the platform features and architecture and explain how it can be easily put to use. Before the conclusions, we explain how test analytics fits in the broader context of software analytics and present our ideas for future work.
{"title":"Supporting Agile Teams with a Test Analytics Platform: A Case Study","authors":"O. Liechti, J. Pasquier-Rocha, R. Reis","doi":"10.1109/AST.2017.3","DOIUrl":"https://doi.org/10.1109/AST.2017.3","url":null,"abstract":"Continuous improvement, feedback mechanisms and automated testing are cornerstones of agile methods. We introduce the concept of test analytics, which brings these three practices together. We illustrate the concept with an industrial case study and describe the experiments run by a team who had set a goal for itself to get better at testing. Beyond technical aspects, we explain how these experiments have changed the mindset and the behaviour of the team members. We then present an open source test analytics platform, later developed to share the positive learnings with the community. We describe the platform features and architecture and explain how it can be easily put to use. Before the conclusions, we explain how test analytics fits in the broader context of software analytics and present our ideas for future work.","PeriodicalId":141557,"journal":{"name":"2017 IEEE/ACM 12th International Workshop on Automation of Software Testing (AST)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134257752","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 6