Towards a Testing Methodology for Reactive Systems: A Case Study of a Landing Gear Controller
L. Madani, V. Papailiopoulou, I. Parissis. 2010 Third International Conference on Software Testing, Verification and Validation, 6 April 2010. DOI: 10.1109/ICST.2010.21

In this case study, we test the landing gear control system of a military aircraft with the new version of Lutess, a tool for automatically testing synchronous software. Lutess requires the tester to specify the environment of the software under test by means of invariant properties that guide the test data generation. This specification can be enriched with an operational profile in order to obtain more realistic scenarios. Moreover, test generation guided by safety properties makes it possible to test the key features of the software more thoroughly, possibly under hypotheses on the software behavior; in this case, the generator chooses input data that are able to violate the properties. The new version of Lutess is based on constraint logic programming and provides additional features (numeric inputs and outputs, hypotheses for safety-guided testing, more powerful operational profiles). In this paper, we present the steps necessary to build the test model for Lutess on a real case study from avionics. This makes it possible to better understand the applicability of the approach and to assess the difficulty and effectiveness of such a process in real-world applications.
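Lutess itself relies on constraint logic programming; as a rough intuition only (the names below are hypothetical and not the Lutess API), environment-guided generation can be pictured as drawing random input vectors and keeping only those that satisfy every environment invariant at each synchronous step:

```python
import random

def generate_inputs(invariants, num_steps, domain, max_tries=1000):
    """Build an input trace in which every step satisfies all environment
    invariants (a generate-and-filter stand-in for constraint solving)."""
    trace = []
    for _ in range(num_steps):
        for _ in range(max_tries):
            candidate = {name: random.choice(values)
                         for name, values in domain.items()}
            if all(inv(candidate, trace) for inv in invariants):
                trace.append(candidate)
                break
        else:
            raise RuntimeError("no input satisfying the invariants found")
    return trace

# Toy environment invariant for a landing gear: extension cannot be
# requested while the aircraft is on the ground.
def no_extend_on_ground(inputs, trace):
    return not (inputs["on_ground"] and inputs["extend_request"])

domain = {"on_ground": [True, False], "extend_request": [True, False]}
trace = generate_inputs([no_extend_on_ground], num_steps=5, domain=domain)
assert all(not (s["on_ground"] and s["extend_request"]) for s in trace)
```

A real environment specification would also constrain how inputs evolve across steps (the `trace` argument), which is where the synchronous setting differs from plain random testing.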
Precisely Detecting Runtime Change Interactions for Evolving Software
Raúl A. Santelices, M. J. Harrold, A. Orso. 2010 Third International Conference on Software Testing, Verification and Validation, 6 April 2010. DOI: 10.1109/ICST.2010.29

Developers often make multiple changes to software. These changes are introduced to work cooperatively or to accomplish separate goals. However, changes might not interact as expected or may produce undesired side effects. Thus, it is crucial for software-development tasks to know exactly which changes interact. For example, testers need this information to ensure that regression test suites exercise the combined behaviors of changes, and teams of developers must determine whether it is safe to merge variants of a program modified in parallel. Existing techniques can be used to detect potential interactions among changes at runtime, but their reports tend to be coarse and imprecise. To address this problem, we first present a formal model of change interactions at the code level, and then describe a new technique, based on this model, for accurately detecting such interactions at runtime. We also present the results of a comparison of our technique with other techniques on a set of Java subjects. Our results suggest that existing techniques are too inaccurate and that only our technique, of all those studied, provides acceptable confidence in detecting real change interactions occurring at runtime.
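The paper's formal model is defined at the code level; as a rough intuition only (not the authors' precise dynamic analysis), two changes can be said to interact on a test when the combined version's behavior cannot be explained by either change in isolation:

```python
def interact(base, c1, c2, combined, test_input):
    """Heuristic interaction check: changes c1 and c2 interact on
    test_input if the combined version's output differs from the base
    output and from each single-change output."""
    b, o1, o2, o12 = (f(test_input) for f in (base, c1, c2, combined))
    return o12 not in (b, o1, o2)

# Toy program versions: base, each change alone, both changes together.
base    = lambda x: x + 1
change1 = lambda x: x + 2        # change 1: different increment
change2 = lambda x: (x + 1) * 3  # change 2: scale the result
both    = lambda x: (x + 2) * 3  # both changes: effects compose

assert interact(base, change1, change2, both, 5)   # 21 vs {6, 7, 18}
assert not interact(base, change1, change2, change1, 5)
```

A runtime detector must do much more than compare outputs (it tracks which changed statements influence shared state during one execution), but the output-level check conveys why naive detection over-approximates: any shared execution path looks like a potential interaction.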
Satisfying Test Preconditions through Guided Object Selection
Yi Wei, S. Gebhardt, B. Meyer, M. Oriol. 2010 Third International Conference on Software Testing, Verification and Validation, 6 April 2010. DOI: 10.1109/ICST.2010.34

A random testing strategy can be effective at finding faults, but it may leave some routines entirely untested if it never gets to call them on objects satisfying their preconditions. This limitation is particularly frustrating when the object pool does contain some precondition-satisfying objects but the strategy, which selects objects at random, does not use them. The extension of random testing described in this article addresses the problem. Experimentally, the resulting strategy succeeds in testing 56% of the routines that the pure random strategy missed; it tests hard routines 3.6 times more often; although it misses some of the faults detected by the original strategy, it finds 9.5% more faults overall; and it incurs negligible overhead.
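A minimal sketch of the idea (names are illustrative, not the authors' tool): instead of picking a receiver object blindly from the pool, filter the pool for objects that satisfy the routine's precondition first, falling back to random choice only when none qualifies:

```python
import random

class Account:
    def __init__(self, balance):
        self.balance = balance
    def withdraw(self, amount):
        # precondition: amount <= balance
        assert amount <= self.balance, "precondition violated"
        self.balance -= amount

def pick_guided(pool, precondition):
    """Guided selection: prefer pool objects satisfying the routine's
    precondition; fall back to pure random choice if none does."""
    ok = [obj for obj in pool if precondition(obj)]
    return random.choice(ok) if ok else random.choice(pool)

# Only one of four pooled objects can legally receive withdraw(50).
# Pure random selection hits it 25% of the time; guided selection always.
pool = [Account(0), Account(0), Account(0), Account(100)]
target = pick_guided(pool, lambda a: a.balance >= 50)
target.withdraw(50)
assert target.balance == 50
```

The trade-off the paper measures follows directly: guided selection reaches routines with hard preconditions far more often, at the cost of occasionally skipping the precondition-violating calls that pure random testing uses to find contract faults.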
Does Hardware Configuration and Processor Load Impact Software Fault Observability?
R. Syed, Brian P. Robinson, L. Williams. 2010 Third International Conference on Software Testing, Verification and Validation, 6 April 2010. DOI: 10.1109/ICST.2010.55

Intermittent failures and nondeterministic behavior complicate and compromise the effectiveness of software testing and debugging. To increase the observability of software faults, we explore the effect that hardware configurations and processor load have on intermittent failures and the nondeterministic behavior of software systems. We conducted a case study on Mozilla Firefox with a selected set of reported field failures. We replicated the conditions that caused the reported failures ten times on each of nine hardware configurations, varying processor speed, memory, hard drive capacity, and processor load. Using several observability tools, we found that configurations with slower processors and less memory exposed more failures than the others. Our results also show that by manipulating processor load, we can influence the observability of some faults.
Simulated Satisfaction of Coverage Criteria on UML State Machines
Stephan Weißleder. 2010 Third International Conference on Software Testing, Verification and Validation, 6 April 2010. DOI: 10.1109/ICST.2010.28

UML state machines are widely used as test models in model-based testing. Coverage criteria are applied to them, e.g., to measure a test suite's coverage of the state machine or to steer automatic test suite generation from the state machine. The model elements to cover, as described by the applied coverage criterion, depend on the structure of the state machine, and model transformations can be used to change this structure. In this paper, we present semantics-preserving state machine transformations that are used to influence the result of the applied coverage criteria. The contribution is that almost every feasible coverage criterion applied to the transformed state machine can have at least the same effect as any other feasible, possibly stronger coverage criterion applied to the original state machine. We introduce simulated satisfaction as the corresponding relation between coverage criteria. We provide formal definitions for coverage criteria and use them to prove the correctness of the model transformations that substantiate the simulated satisfaction relations. The results of this paper are especially important for model-based test generation tools, which are often limited to satisfying a restricted set of coverage criteria.
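As background for the coverage criteria being related, a minimal sketch (a flat machine with hypothetical names, far simpler than UML state machines) of measuring all-states versus all-transitions coverage for a test suite:

```python
# Transitions as (source, event, target) triples.
transitions = [("Idle", "start", "Run"), ("Run", "pause", "Idle"),
               ("Run", "stop", "Done")]
states = {s for (src, _, tgt) in transitions for s in (src, tgt)}

def run(machine, initial, events):
    """Execute an event sequence; return visited states and fired transitions."""
    state, visited, fired = initial, {initial}, set()
    for e in events:
        for t in machine:
            if t[0] == state and t[1] == e:
                fired.add(t)
                state = t[2]
                visited.add(state)
                break
    return visited, fired

suite = [["start", "stop"], ["start", "pause"]]
visited, fired = set(), set()
for test in suite:
    v, f = run(transitions, "Idle", test)
    visited |= v
    fired |= f

# This suite satisfies all-states, and here all-transitions as well.
assert visited == states
assert fired == set(transitions)
```

The paper's point is the converse direction: by restructuring the machine (without changing its semantics), a tool limited to a weak criterion such as all-states can be made to produce suites that would satisfy a stronger criterion on the original machine.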
Automated Test Data Generation on the Analyses of Feature Models: A Metamorphic Testing Approach
Sergio Segura, R. Hierons, David Benavides, Antonio Ruiz-Cortés. 2010 Third International Conference on Software Testing, Verification and Validation, 6 April 2010. DOI: 10.1109/ICST.2010.20

A Feature Model (FM) is a compact representation of all the products of a software product line. The automated extraction of information from FMs is a thriving research topic involving a number of analysis operations, algorithms, paradigms, and tools. Implementing these operations is far from trivial and easily leads to errors and defects in analysis solutions. Current testing methods in this context mainly rely on the ability of the tester to decide whether the output of an analysis is correct. However, this is acknowledged to be time-consuming, error-prone, and in most cases infeasible due to the combinatorial complexity of the analyses. In this paper, we present a set of relations (so-called metamorphic relations) between input FMs and their sets of products, and a test data generator relying on them. Given an FM and its known set of products, a set of neighbour FMs together with their corresponding sets of products are automatically generated and used for testing different analyses. Complex FMs representing millions of products can be efficiently created by applying this process iteratively. The evaluation of our approach using mutation testing as well as real faults and tools reveals that most faults can be automatically detected within a few seconds.
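One concrete metamorphic relation of this kind (a simplified illustration, not the paper's exact formulation): adding an optional feature under an existing feature yields a follow-up model whose product set is the original one plus, for each product containing the parent, a copy extended with the new feature. Because both product sets are known, any analysis (e.g., counting products) can be checked on the follow-up FM without a manual oracle:

```python
def add_optional_feature(products, parent, new_feature):
    """Metamorphic relation (simplified): the product set of the follow-up
    FM obtained by adding optional `new_feature` under `parent`."""
    follow_up = set(products)
    for p in products:
        if parent in p:
            follow_up.add(p | {new_feature})
    return follow_up

# Seed FM: root with mandatory child A and optional child B.
seed = {frozenset({"root", "A"}), frozenset({"root", "A", "B"})}

# Follow-up FM: optional feature C added under A.
expected = seed | {frozenset({"root", "A", "C"}),
                   frozenset({"root", "A", "B", "C"})}
assert add_optional_feature(seed, "A", "C") == expected
```

Iterating such relations from a small seed is what lets the generator build FMs with millions of products whose exact product sets remain known.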
MbSRT2: Model-Based Selective Regression Testing with Traceability
L. Naslavsky, H. Ziv, D. Richardson. 2010 Third International Conference on Software Testing, Verification and Validation, 6 April 2010. DOI: 10.1109/ICST.2010.61

Widespread adoption of model-centric development has created opportunities for software testing, with Model-Based Testing (MBT). MBT supports the generation of test cases from models and the demonstration of compliance between models and source code. Models evolve, much like source code. Thus, an important activity of MBT is selective regression testing, which selects test cases for retest based on model modifications rather than source-code modifications. This activity explores relationships between model elements and the test cases that traverse those elements in order to locate retestable test cases. We contribute an approach and prototype for model-based selective regression testing, whereby fine-grained traceability relationships among entities in models and test cases are persisted into a traceability infrastructure throughout the test generation process: the relationships represent the reasons for test case creation and are used to select test cases for re-run. The approach builds upon existing regression test selection techniques and adopts scenarios as the behavioral modeling perspective. We analyze the precision, efficiency, and safety of the approach through case studies and through theoretical and intuitive reasoning.
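The selection step itself can be pictured very simply (a minimal sketch of traceability-driven selection, not MbSRT2 itself; all names are hypothetical): persist, for each generated test, the model elements it traverses, then select every test whose traced elements intersect the modified set:

```python
def select_tests(traceability, modified_elements):
    """Select tests whose traced model elements intersect the set of
    modified elements; untouched tests are safely skipped."""
    return {test for test, elements in traceability.items()
            if elements & modified_elements}

# Hypothetical traceability links: test case -> model elements it traverses.
traceability = {
    "T1": {"msg:login", "state:Idle"},
    "T2": {"msg:logout"},
    "T3": {"msg:login", "msg:checkout"},
}
assert select_tests(traceability, {"msg:login"}) == {"T1", "T3"}
assert select_tests(traceability, {"state:Offline"}) == set()
```

The precision/safety analysis in the paper concerns exactly how fine-grained these links must be: coarse links select too many tests (imprecise), while missing links risk skipping a test affected by the change (unsafe).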
Characterizing the Chain of Evidence for Software Safety Cases: A Conceptual Model Based on the IEC 61508 Standard
R. Panesar-Walawege, M. Sabetzadeh, L. Briand, T. Coq. 2010 Third International Conference on Software Testing, Verification and Validation, 6 April 2010. DOI: 10.1109/ICST.2010.12

Increasingly, licensing and safety regulatory bodies require the suppliers of software-intensive, safety-critical systems to provide an explicit software safety case: a structured set of arguments based on objective evidence to demonstrate that the software elements of a system are acceptably safe. Existing research on safety cases has mainly focused on how to build the arguments in a safety case from available evidence, but little has been done to precisely characterize what this evidence should be. As a result, system suppliers are left with practically no guidance on what evidence to collect during software development. This has led to suppliers having to recover the relevant evidence after the fact, an extremely costly and sometimes impractical task. Although standards such as IEC 61508, widely viewed as the best available generic standard for managing functional safety in software, provide some guidance for the collection of relevant safety and certification information, this guidance is mostly textual, is not expressed in a precise and structured form, and is not easy to specialize to context-specific needs. To address these issues, we present a conceptual model to characterize the evidence for arguing about software safety. Our model captures both the information requirements for demonstrating compliance with IEC 61508 and the traceability links necessary to create a seamless chain of evidence. We further describe how our generic model can be specialized according to the needs of a particular context, and discuss some important ways in which our model can facilitate software certification.
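The flavor of such a conceptual model can be sketched as a tiny evidence-traceability fragment (the class and attribute names below are illustrative only; they are not taken from the paper's model or from IEC 61508): requirements link to the evidence items supporting them, and a "seamless chain" means no requirement is left unsupported:

```python
from dataclasses import dataclass, field

@dataclass
class Evidence:
    description: str

@dataclass
class SafetyRequirement:
    identifier: str
    evidence: list = field(default_factory=list)

    def is_supported(self):
        # A requirement is supported once at least one evidence item
        # is traced to it.
        return bool(self.evidence)

req = SafetyRequirement("SR-12: gear retracts within 10 s of command")
assert not req.is_supported()           # gap in the chain of evidence
req.evidence.append(Evidence("unit test report"))
req.evidence.append(Evidence("hazard analysis record"))
assert req.is_supported()

# The chain is seamless when every requirement is supported.
assert all(r.is_supported() for r in [req])
```

Capturing such links during development, rather than reconstructing them before certification, is precisely the after-the-fact cost the paper aims to avoid.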
Regression Test Selection and Product Line System Testing
Emelie Engström. 2010 Third International Conference on Software Testing, Verification and Validation, 6 April 2010. DOI: 10.1109/ICST.2010.45

Context: Software product lines (SPLs) are used in industry to achieve more efficient software development. Testing an SPL is complex and costly, and often becomes a bottleneck in the product line organization. Objective: This research aims to develop and evaluate strategies for improving system test selection in an SPL. Method: Initially, industrial practices and research in both SPL testing and traditional regression test selection were surveyed. Two systematic literature reviews, two industrial exploratory surveys, and one industrial evaluation of a pragmatic test selection approach have been conducted. Results: There is a lack of industrial evaluations as well as of useful solutions, both regarding regression test selection and SPL testing. Test selection is an activity of varying scope and preconditions, strongly dependent on the context in which it is applied. Conclusions: Continued research will be done in close cooperation with industry, with the goal of defining a tool for visualizing system test coverage in a product line and the delta between a product and the covered part of the product line.
From Nondeterministic UML Protocol Statemachines to Class Contracts
Ivan Porres, I. Rauf. 2010 Third International Conference on Software Testing, Verification and Validation, 6 April 2010. DOI: 10.1109/ICST.2010.62

A UML protocol state machine describes a behavioral interface for a class as a number of states and transitions between states triggered by method calls. In this paper, we present an approach to generate behavioral class interfaces in the form of class contracts from UML protocol state machines. The generated contracts can be used for documentation, test case generation, as a test oracle, and as run-time assertions, and thus help to test and validate the implementation of a class against its interface. We formalize the structure and semantics of protocol state machines for generating class contracts. The state invariants of the source and target states are considered along with the pre- and post-conditions of the transitions. Different types of transitions (simple, join, fork, high-level, and self transitions) are supported, as well as nondeterministic behavior. The approach is supported by a tool that automatically generates the contracts from UML models.
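The core of the mapping can be sketched for simple transitions (a simplification of the paper's formalization, with hypothetical names): a method's precondition is the disjunction, over its transitions, of the source-state invariant conjoined with the guard; its postcondition, when several targets are possible nondeterministically, is the disjunction of the target-state invariants:

```python
def contract_for(method, transitions, invariants):
    """Derive (pre, post) predicates for `method` from its transitions.
    Nondeterminism (several transitions for one call) becomes a
    disjunction in both the pre- and the postcondition."""
    relevant = [t for t in transitions if t["call"] == method]
    def pre(obj):
        return any(invariants[t["source"]](obj) and t["guard"](obj)
                   for t in relevant)
    def post(obj):
        return any(invariants[t["target"]](obj) for t in relevant)
    return pre, post

# Toy protocol for a bounded buffer: put() moves NotFull -> {NotFull, Full}.
invariants = {
    "NotFull": lambda b: len(b["items"]) < b["cap"],
    "Full":    lambda b: len(b["items"]) == b["cap"],
}
transitions = [
    {"call": "put", "source": "NotFull", "guard": lambda b: True, "target": "NotFull"},
    {"call": "put", "source": "NotFull", "guard": lambda b: True, "target": "Full"},
]
pre, post = contract_for("put", transitions, invariants)

buf = {"items": [1], "cap": 2}
assert pre(buf)                 # NotFull holds before put
buf["items"].append(2)
assert post(buf)                # Full (one of the two targets) holds after
```

Join, fork, high-level, and self transitions require extra care in the real formalization; the disjunctive shape above is only the simple-transition case.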