
2010 Third International Conference on Software Testing, Verification and Validation: Latest Publications

MuTMuT: Efficient Exploration for Mutation Testing of Multithreaded Code
Miloš Gligorić, V. Jagannath, D. Marinov
Mutation testing is a method for measuring the quality of test suites. Given a system under test and a test suite, mutations are systematically inserted into the system, and the test suite is executed to determine which mutants it detects. A major cost of mutation testing is the time required to execute the test suite on all the mutants. This cost is even greater when the system under test is multithreaded: not only are test cases from the test suite executed on many mutants, but also each test case is executed for multiple possible thread schedules. We introduce a general framework that can reduce the time for mutation testing of multithreaded code. We present four techniques within the general framework and implement two of them in a tool called MuTMuT. We evaluate MuTMuT on eight multithreaded programs. The results show that MuTMuT reduces the time for mutation testing: substantially compared to straightforward mutant execution, and by up to 77% when the advanced technique is used instead of the basic one.
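To make the execution cost described above concrete, the following is a minimal sketch of the basic mutation-testing loop, not MuTMuT itself; `mutants`, `test_suite`, and `run_test` are hypothetical placeholders for generated program variants, the existing tests, and a test runner.

```python
def mutation_score(mutants, test_suite, run_test):
    """Run every test against every mutant; a mutant is 'killed' when some test fails on it.
    For multithreaded code, each (mutant, test) pair would additionally be executed under
    multiple thread schedules, which is the cost the MuTMuT framework aims to reduce."""
    killed = 0
    for mutant in mutants:                  # systematically generated program variants
        for test in test_suite:
            if not run_test(mutant, test):  # test fails on the mutant -> mutant detected
                killed += 1
                break                       # no need to run the remaining tests on this mutant
    return killed / len(mutants)            # fraction of mutants detected by the suite
```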
Citations: 53
Online Testing Framework for Web Services
Tien-Dung Cao, Patrick Félix, R. Castanet, Ismail Berrada
Testing conceptually consists of three activities: test case generation, test case execution and verdict assignment. Using online testing, test cases are generated and simultaneously executed (i.e. the complete test scenario is built during test execution). This paper presents a framework that automatically generates and executes tests "online" for conformance testing of a composite of Web services described in BPEL. The proposed framework considers unit testing and it is based on a timed modeling of BPEL specification, a distributed testing architecture and an online testing algorithm that generates, executes and assigns verdicts to every generated state in the test case.
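As an illustration of the "generate and simultaneously execute" idea described above, here is a minimal sketch of an online testing loop. The interfaces (`model.choose_input`, `model.allows`, `model.next_state`, `sut.execute`) are assumed names for this sketch and are not the authors' framework or its BPEL-specific machinery.

```python
def online_test(model, sut, max_steps=100):
    """Minimal sketch of online (on-the-fly) testing: instead of precomputing a test
    scenario, each step derives the next input from the specification model, executes
    it immediately against the system under test, and assigns a verdict on the output."""
    state = model.initial_state()
    for _ in range(max_steps):
        stimulus = model.choose_input(state)         # next test step derived from the model
        if stimulus is None:                         # model allows no further input
            return "pass"
        observed = sut.execute(stimulus)             # e.g. an invocation of the Web service
        if not model.allows(state, stimulus, observed):
            return "fail"                            # output not permitted by the specification
        state = model.next_state(state, stimulus, observed)
    return "inconclusive"                            # step budget exhausted without a failure
```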
Citations: 37
Text2Test: Automated Inspection of Natural Language Use Cases
A. Sinha, S. Sutton, A. Paradkar
The modularity and customer-centric approach of use cases make them a preferred method for requirement elicitation, especially in iterative software development processes such as agile programming. Numerous guidelines exist for use case style and content, but enforcing compliance with such guidelines in industry currently requires specialized training and a strongly managed requirement elicitation process. However, often due to aggressive development schedules, organizations shy away from such extensive processes and end up capturing use cases in an ad-hoc fashion with little guidance. This results in poor-quality use cases that are seldom fit for any downstream software activities. We have developed an approach for automated, “edit-time” inspection of use cases based on the construction and analysis of models of use cases. Our models contain linguistic properties of the use case text along with the functional properties of the system under discussion. In this paper, we present a suite of model analysis techniques that leverage such models to validate use cases simultaneously for their style and content. Such model analysis techniques can be combined with robust NLP techniques to develop integrated development environments for use case authoring, as we do in Text2Test. When used in an industrial setting, Text2Test resulted in better compliance of use cases and in enhanced productivity.
Citations: 69
Automated Behavioral Regression Testing
Wei Jin, A. Orso, Tao Xie
When a program is modified during software evolution, developers typically run the new version of the program against its existing test suite to validate that the changes made to the program did not introduce unintended side effects (i.e., regression faults). This kind of regression testing can be effective in identifying some regression faults, but it is limited by the quality of the existing test suite. Due to the cost of testing, developers build test suites by finding acceptable tradeoffs between cost and thoroughness of the tests. As a result, these test suites tend to exercise only a small subset of the program's functionality and may be inadequate for testing the changes in a program. To address this issue, we propose a novel approach called Behavioral Regression Testing (BERT). Given two versions of a program, BERT identifies behavioral differences between the two versions through dynamic analysis, in three steps. First, it generates a large number of test inputs that focus on the changed parts of the code. Second, it runs the generated test inputs on the old and new versions of the code and identifies differences in the tests' behavior. Third, it analyzes the identified differences and presents them to the developers. By focusing on a subset of the code and leveraging differential behavior, BERT can provide developers with more (and more detailed) information than traditional regression testing techniques. To evaluate BERT, we implemented it as a plug-in for Eclipse, a popular Integrated Development Environment, and used the plug-in to perform a preliminary study on two programs. The results of our study are promising, in that BERT was able to identify true regression faults in the programs.
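The behavioral-differencing step can be illustrated with a minimal sketch; it is not BERT's implementation, and `old_version`, `new_version`, and `generated_inputs` are hypothetical stand-ins for the two program versions and the inputs generated for the changed code.

```python
def behavioral_diff(old_version, new_version, generated_inputs):
    """Minimal sketch of the behavioral-regression idea: run the same generated inputs
    against both program versions and report every input whose observable behavior
    differs. Here only return values are compared; a real tool would also compare
    object state and other outputs."""
    differences = []
    for test_input in generated_inputs:      # inputs focused on the changed parts of the code
        old_behavior = old_version(test_input)
        new_behavior = new_version(test_input)
        if old_behavior != new_behavior:
            differences.append((test_input, old_behavior, new_behavior))
    return differences                       # candidate regressions, presented to the developer
```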
Citations: 92
Machine Learning Methods and Asymmetric Cost Function to Estimate Execution Effort of Software Testing
Daniel Guerreiro e Silva, M. Jino, B. T. D. Abreu
Planning and scheduling of testing activities play an important role for any independent test team that performs tests for different software systems, developed by different development teams. This work studies the application of machine learning tools and variable selection tools to solve the problem of estimating the execution effort of functional tests. An analysis of the test execution process is developed and experiments are performed on two real databases. The main contributions of this paper are the approach of selecting the significant variables for database synthesis and the use of an artificial neural network trained with an asymmetric cost function.
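To illustrate what an asymmetric cost function means in this setting, here is a small sketch; the weighting scheme and the `under_penalty` parameter are illustrative assumptions and are not the cost function used in the paper.

```python
import numpy as np

def asymmetric_squared_error(y_true, y_pred, under_penalty=3.0):
    """Illustrative asymmetric cost: underestimating test-execution effort is usually
    worse than overestimating it, so residuals where the prediction falls short of the
    true effort are weighted more heavily (assumed weighting, not the paper's)."""
    residual = y_true - y_pred
    weights = np.where(residual > 0, under_penalty, 1.0)  # residual > 0 means underestimation
    return np.mean(weights * residual ** 2)

# The same absolute error costs more when it is an underestimate of the effort.
print(asymmetric_squared_error(np.array([10.0]), np.array([8.0])))   # 12.0
print(asymmetric_squared_error(np.array([10.0]), np.array([12.0])))  # 4.0
```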
Citations: 28
Regression Testing Ajax Applications: Coping with Dynamism
D. Roest, A. Mesbah, A. Deursen
There is a growing trend to move desktop applications towards the web using advances made in web technologies such as Ajax. One common way to provide assurance about the correctness of such complex and evolving systems is through regression testing. Regression testing classical web applications has already been a notoriously daunting task because of the dynamism in web interfaces. Ajax applications pose an even greater challenge since the test case fragility degree is higher due to extensive run-time manipulation of the DOM tree and asynchronous client/server interactions. In this paper, we propose a technique, in which we automatically generate test cases and apply pipelined oracle comparators along with generated DOM templates, to deal with dynamic non-deterministic behavior in Ajax user interfaces. Our approach, implemented in Crawljax, is open source and provides a set of generic oracle comparators, template generators, and visualizations of test failure output. We describe two case studies evaluating the effectiveness, scalability, and required manual effort of the approach.
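A minimal sketch of the pipelined oracle comparator idea is shown below; the comparators and regular expressions are illustrative assumptions and do not reproduce the comparators shipped with Crawljax.

```python
import re

def strip_timestamps(dom):
    """Comparator step: ignore time stamps that legitimately change between runs."""
    return re.sub(r"\d{2}:\d{2}:\d{2}", "<TIME>", dom)

def strip_session_ids(dom):
    """Comparator step: ignore session identifiers injected into links."""
    return re.sub(r"sessionid=\w+", "sessionid=<ID>", dom)

def pipelined_compare(old_dom, new_dom, comparators=(strip_timestamps, strip_session_ids)):
    """Minimal sketch of pipelined oracle comparison: each comparator normalizes away
    one source of non-determinism before the DOM snapshots of the original and the
    changed application are compared for regression differences."""
    for normalize in comparators:
        old_dom, new_dom = normalize(old_dom), normalize(new_dom)
    return old_dom == new_dom
```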
Citations: 78
Specification of UML Model Transformations
Shekoufeh Kolahdouz Rahimi
The purpose of our research is to evaluate and compare different approaches for the specification, verification and implementation of model transformations, and to make recommendations for a transformation specification language which is modular, verifiable, and supports reuse and implementation. In this paper we survey existing approaches to model transformations and propose a new specification and implementation approach for transformations. We describe case studies of state machine slicing and of re-architecting systems to achieve quality of service in service-oriented architectures, which are used to evaluate model transformation specification approaches and languages.
Citations: 5
Holistic Model-Based Testing for Business Information Systems
M. Mlynarski
Growing complexity of today’s software development requires new and better techniques in software testing. A promising one seems to be model-based testing. The goal is to automatically generate test artefacts from models, improve test coverage and guarantee traceability. Typical problems are missing reuse of design models and test case explosion. Our research work aims to find a solution for the mentioned problems in the area of UML and Business Information Systems. We use model transformations to automatically generate test models from manually annotated design models using a holistic view. In this paper we define and justify the research problem and present first results.
Citations: 7
A Formal Model for Generating Integrated Functional and User Interface Test Cases
D. Sinnig, F. Khendek, Patrice Chalin
Black box testing focuses on the core functionality of the system, while user interface testing is concerned with details of user interactions. Functional and user interface test cases are usually generated from two distinct system models, one for the functionality and one for the user interface. As a result, test cases derived from either model capture only partial system behavior and as such, are inadequate for testing full system behavior. We propose a method for formally integrating the model for the system functionality and the model for the user interface. The resulting composite model is then used to generate more complete test cases, capturing detailed user interactions as well as secondary system interactions. In this paper we employ use cases for modeling system functionality, and task models for describing user interfaces.
Citations: 4
Towards Fully Automated Test Management for Large Complex Systems
Sigrid Eldh, Joachim Brandt, M. Street, H. Hansson, S. Punnekkat
Development of large and complex software intensive systems with continuous builds typically generates large volumes of information with complex patterns and relations. Systematic and automated approaches are needed for efficient handling of such large quantities of data in a comprehensible way. In this paper we present an approach and tool enabling autonomous behavior in an automated test management tool to gain efficiency in concurrent software development and test. By capturing the required quality criteria in the test specifications and automating the test execution, test management can potentially be performed to a great extent without manual intervention. This work contributes towards a more autonomous behavior within a distributed remote test strategy based on metrics for decision making in automated testing. These metrics optimize management of fault corrections and retest, giving consideration to the impact of the identified weaknesses, such as fault-prone areas in software.
Citations: 13