Applications of Optimization to Logic Testing
Garrett Kent Kaminski and P. Ammann. ICSTW 2010. doi: 10.1109/ICSTW.2010.49

A tradeoff exists in software logic testing between test set size and fault detection. Testers may want to minimize test set size subject to guaranteed fault detection, or to maximize fault detection subject to a fixed test set size. One way to guarantee fault detection is to use heuristics to produce tests that satisfy logic criteria. Some logic criteria have the property that a test set that guarantees detection of certain faults also satisfies the criterion. An empirical study compares test set size and computation time for heuristics and optimization across various faults and criteria. The results show that optimization is the better choice for applications where each test has significant cost: for a small difference in computation time, optimization reduces test set size. A second empirical study examined the percentage of faults detected in the best, random, and worst cases, first for a test set size of one and then for a test set size of ten. This study showed that when only a limited number of tests can be chosen, exactly which tests are chosen has a large impact on fault detection.
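To make the size/detection tradeoff concrete, here is a minimal Python sketch: a Boolean predicate, a few hand-written faulty variants standing in for the paper's fault classes, and an exhaustive search for the smallest test set that detects every fault. The predicate, the faults, and the brute-force search are all illustrative assumptions; the paper applies optimization techniques rather than enumeration.

```python
from itertools import combinations, product

# Original predicate plus hypothetical faulty variants (illustrative only; the
# logic fault classes studied in the paper are richer than these three).
orig = lambda a, b, c: (a and b) or c
faults = [
    lambda a, b, c: (a or b) or c,       # operator fault: 'and' replaced by 'or'
    lambda a, b, c: (a and b) and c,     # operator fault: 'or' replaced by 'and'
    lambda a, b, c: (not a and b) or c,  # negation fault on 'a'
]

tests = list(product([False, True], repeat=3))  # all 8 possible assignments

def detects(test, fault):
    """A test detects a fault if the faulty predicate disagrees with the original."""
    return orig(*test) != fault(*test)

# Smallest test set that detects every fault, found by exhaustive search
# (fine at this scale; real instances call for the paper's optimization approach).
best = None
for size in range(1, len(tests) + 1):
    for candidate in combinations(tests, size):
        if all(any(detects(t, f) for t in candidate) for f in faults):
            best = candidate
            break
    if best:
        break

print("minimum detecting test set:", best)
```

Here no single test kills all three variants, so the minimum is a two-test set, which is exactly the kind of reduction over a one-test-per-fault heuristic the abstract describes.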
{"title":"Applications of Optimization to Logic Testing","authors":"Garrett Kent Kaminski, P. Ammann","doi":"10.1109/ICSTW.2010.49","DOIUrl":"https://doi.org/10.1109/ICSTW.2010.49","url":null,"abstract":"A tradeoff exists in software logic testing between test set size and fault detection. Testers may want to minimize test set size subject to guaranteeing fault detection or they may want to maximize faults detection subject to a test set size. One way to guarantee fault detection is to use heuristics to produce tests that satisfy logic criteria. Some logic criteria have the property that they are satisfied by a test set if detection of certain faults is guaranteed by that test set. An empirical study is conducted to compare test set size and computation time for heuristics and optimization for various faults and criteria. The results show that optimization is a better choice for applications where each test has significant cost, because for a small difference in computation time, optimization reduces test set size. A second empirical study examined the percentage of faults detected in a best, random, and worst case, first for a test set size of one and then again for a test set size of ten. This study showed that if you have a limited number of tests from which to choose, the exact tests you choose have a large impact on fault detection.","PeriodicalId":117410,"journal":{"name":"2010 Third International Conference on Software Testing, Verification, and Validation Workshops","volume":"131 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-04-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122745979","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Applying Mutation Testing to Web Applications
Upsorn Praphamontripong and J. Offutt. ICSTW 2010. doi: 10.1109/ICSTW.2010.38

As our awareness of the complexities inherent in web applications grows, we find an increasing need for more sophisticated ways to test them. Many web application faults result from how web software components interact, sometimes client-server and sometimes server-server. This paper presents a novel solution to the problem of integration testing of web applications using mutation analysis. It defines new mutation operators, presents a tool (webMuJava) that implements these operators, and reports results from a case study applying the tool to a small web application. The results show that mutation analysis can help create tests that are effective at finding web application faults, and they indicate several directions for improvement.
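To illustrate what a web-specific mutation operator might look like, the sketch below flips a form's submission method (GET to POST and back), targeting the component-interaction layer rather than program logic. Whether webMuJava defines exactly this operator is an assumption; the operator, function name, and sample page are illustrative only.

```python
import re

def mutate_form_methods(html):
    """Yield one mutant page per <form> tag, each with its method flipped."""
    for match in re.finditer(r'method\s*=\s*"(get|post)"', html, re.IGNORECASE):
        flipped = 'post' if match.group(1).lower() == 'get' else 'get'
        yield html[:match.start(1)] + flipped + html[match.end(1):]

page = '<form action="login.jsp" method="post"> ... </form>'
for i, mutant in enumerate(mutate_form_methods(page)):
    print(f"mutant {i}: {mutant}")
```

A test suite that exercises the client-server hand-off (e.g., checks that credentials never appear in the URL) would be expected to kill such a mutant, which is what makes interaction-level operators useful.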
{"title":"Applying Mutation Testing to Web Applications","authors":"Upsorn Praphamontripong, J. Offutt","doi":"10.1109/ICSTW.2010.38","DOIUrl":"https://doi.org/10.1109/ICSTW.2010.38","url":null,"abstract":"As our awareness of the complexities inherent in web applications grows, we find an increasing need for more sophisticated ways to test them. Many web application faults are a result of how web software components interact; sometimes client-server and sometimes server-server. This paper presents a novel solution to the problem of integration testing of web applications by using mutation analysis. New mutation operators are defined, a tool (webMuJava) that implements these operators is presented, and results from a case study applying the tool to test a small web application are presented. The results show that mutation analysis can help create tests that are effective at finding web application faults, as well as indicating several directions for improvement.","PeriodicalId":117410,"journal":{"name":"2010 Third International Conference on Software Testing, Verification, and Validation Workshops","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-04-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132258707","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Taint Dependency Sequences: A Characterization of Insecure Execution Paths Based on Input-Sensitive Cause Sequences
Dumitru Ceara, L. Mounier, and Marie-Laure Potet. ICSTW 2010. doi: 10.1109/ICSTW.2010.28

Numerous software vulnerabilities can be activated only by dedicated user inputs. Taint analysis is a security check that looks for possible dependency chains between user inputs and vulnerable statements (such as array accesses). Most existing static taint analysis tools produce warnings on potentially vulnerable program locations. It is then up to the developer to analyze these results by scanning the possible execution paths that may lead to these locations with unsecured user inputs. We present a Taint Dependency Sequences Calculus, based on a fine-grained data and control taint analysis, that aims to help the developer in this task by providing information on the set of paths that need to be analyzed. Following ideas introduced in [1], [2], we also propose metrics to characterize these paths in terms of "dangerousness". The approach is illustrated on the Verisec Suite [3] and through a prototype called STAC.
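As a rough illustration of the underlying idea, the sketch below propagates data taint through a toy straight-line program and reports the dependency chain from a user input to a vulnerable statement. The program representation and the names ('user_input', 'buf') are assumptions for illustration; STAC's actual calculus is fine-grained and also tracks control taint, which this sketch omits.

```python
# Minimal data-flow taint propagation: statements are (target, sources) pairs.
def propagate(statements, tainted_inputs):
    tainted = set(tainted_inputs)
    chains = {v: [v] for v in tainted}      # one dependency chain per variable
    for target, sources in statements:
        hit = [s for s in sources if s in tainted]
        if hit:
            tainted.add(target)
            chains[target] = chains[hit[0]] + [target]
    return tainted, chains

program = [
    ("n",   ["user_input"]),   # n   = read()
    ("idx", ["n"]),            # idx = n - 1
    ("tmp", ["const"]),        # tmp = 42 (untainted)
    ("buf", ["idx"]),          # buf[idx] = ...  <- vulnerable array access
]

tainted, chains = propagate(program, {"user_input"})
if "buf" in tainted:
    print("warning: tainted path to array access:", " -> ".join(chains["buf"]))
```

The printed chain (user_input -> n -> idx -> buf) is a degenerate form of the taint dependency sequences the paper uses to tell the developer which paths deserve scrutiny.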
{"title":"Taint Dependency Sequences: A Characterization of Insecure Execution Paths Based on Input-Sensitive Cause Sequences","authors":"Dumitru Ceara, L. Mounier, Marie-Laure Potet","doi":"10.1109/ICSTW.2010.28","DOIUrl":"https://doi.org/10.1109/ICSTW.2010.28","url":null,"abstract":"Numerous software vulnerabilities can be activated only with dedicated user inputs. Taint analysis is a security check which consists in looking for possible dependency chains between user inputs and vulnerable statements (like array accesses). Most of the existing static taint analysis tools produce some warnings on potentially vulnerable program locations. It is then up to the developer to analyze these results by scanning the possible execution paths that may lead to these locations with unsecured user inputs. We present a Taint Dependency Sequences Calculus, based on a fine-grain data and control taint analysis, that aims to help the developer in this task by providing some information on the set of paths that need to be analyzed. Following some ideas introduced in [1], [2], we also propose some metrics to characterize these paths in term of \"dangerousness\". This approach is illustrated with the help of the Verisec Suite [3] and by describing a prototype, called STAC.","PeriodicalId":117410,"journal":{"name":"2010 Third International Conference on Software Testing, Verification, and Validation Workshops","volume":"33 1-2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-04-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116717605","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Managing Testing Complexity in Dynamically Adaptive Systems: A Model-Driven Approach
Kristopher Welsh and P. Sawyer. ICSTW 2010. doi: 10.1109/ICSTW.2010.57

Autonomous systems are increasingly conceived as a means to allow operation in changeable or poorly understood environments. However, granting a system autonomy over its operation removes the developer's ability to be completely sure of the system's behaviour under all operating contexts. This combination of environmental and behavioural uncertainty makes achieving assurance through testing very problematic. This paper focuses on a class of system, called an m-DAS, that uses run-time models to drive run-time adaptation to changing environmental conditions. We propose a testing approach that is itself model-driven, using model analysis to significantly reduce the set of test cases needed to test for emergent behaviour. Limited testing resources may therefore be prioritised for the scenarios in which emergent behaviour is most likely to be observed.
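A hypothetical sketch of the kind of model-driven reduction described: rather than testing every (environment, configuration) pair, only the pairs the adaptation model marks as reachable are kept. The environments, configurations, and hand-written model below are invented for illustration; a real m-DAS would derive them from its run-time models.

```python
from itertools import product

environments = ["normal", "low_battery", "no_network", "high_load"]
configs      = ["full", "degraded", "offline"]

# Adaptation model: which target configurations each environment can trigger.
# Hand-written here; in an m-DAS this would come from the run-time models.
model = {
    "normal":      {"full"},
    "low_battery": {"degraded"},
    "no_network":  {"offline", "degraded"},
    "high_load":   {"degraded"},
}

all_cases = list(product(environments, configs))             # 12 candidate tests
reachable = [(e, c) for e, c in all_cases if c in model[e]]  # 5 remain

print(f"pruned {len(all_cases)} candidate tests down to {len(reachable)}:")
for env, cfg in reachable:
    print(f"  test adaptation: {env} -> {cfg}")
```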
{"title":"Managing Testing Complexity in Dynamically Adaptive Systems: A Model-Driven Approach","authors":"Kristopher Welsh, P. Sawyer","doi":"10.1109/ICSTW.2010.57","DOIUrl":"https://doi.org/10.1109/ICSTW.2010.57","url":null,"abstract":"Autonomous systems are increasingly conceived as a means to allow operation in changeable or poorly understood environments. However, granting a system autonomy over its operation removes the ability of the developer to be completely sure of the system's behaviour under all operating contexts. This combination of environmental and behavioural uncertainty makes the achievement of assurance through testing very problematic. This paper focuses on a class of system, called an m-DAS, that uses run-time models to drive run-time adaptations in changing environmental conditions. We propose a testing approach which is itself model-driven, using model analysis to significantly reduce the set of test cases needed to test for emergent behaviour. Limited testing resources may therefore be prioritised for the most likely scenarios in which emergent behaviour may be observed.","PeriodicalId":117410,"journal":{"name":"2010 Third International Conference on Software Testing, Verification, and Validation Workshops","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-04-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129255607","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Preliminary Study on BPEL Process Testability
S. Salva and I. Rabhi. ICSTW 2010. doi: 10.1109/ICSTW.2010.14

WS-BPEL is an OASIS-standard language for describing interactions in Service-Oriented Architectures (SOA). BPEL processes are typically embedded in large business applications composed of web services, and such applications are increasingly developed under quality processes. Testability, the topic of this paper, is a quality criterion for testing activities that evaluates test coverage and testing cost. We study BPEL testability against two well-known testability criteria: observability and controllability. To evaluate them, we propose to transform ABPEL specifications into Symbolic Transition Systems (STSs) and to apply existing methods. Then, from STS testability issues, we deduce patterns of ABPEL testability degradation. These patterns, in turn, lead to methods for enhancing the testability of ABPEL specifications.
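The sketch below shows one simple way such checks could look on an STS-like transition relation. The particular definitions used (two transitions with the same input and output leading to different states hurt observability; an internal transition competing with input-triggered ones hurts controllability) are common formulations from the testing literature, not necessarily the exact ones the paper applies, and the transition data is invented.

```python
from collections import defaultdict

# (state, input, output, next_state); input None models an internal step.
transitions = [
    ("s0", "?login", "!ok",   "s1"),
    ("s0", "?login", "!ok",   "s2"),   # same input/output, different target
    ("s1", None,     "!log",  "s3"),   # internal transition
    ("s1", "?query", "!data", "s3"),
]

by_state = defaultdict(list)
for s, i, o, t in transitions:
    by_state[s].append((i, o, t))

for state, outs in by_state.items():
    # Observability: an observed (input, output) pair should determine the move.
    targets_for = defaultdict(set)
    for i, o, t in outs:
        targets_for[(i, o)].add(t)
    for (i, o), targets in targets_for.items():
        if len(targets) > 1:
            print(f"observability issue at {state}: ({i},{o}) -> {sorted(targets)}")
    # Controllability: the tester cannot steer past a competing internal step.
    if any(i is None for i, _, _ in outs) and len(outs) > 1:
        print(f"controllability issue at {state}: internal step competes with inputs")
```

Degradation patterns like these, lifted back to the ABPEL level, are what the paper turns into concrete testability-enhancement advice.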
{"title":"A Preliminary Study on BPEL Process Testability","authors":"S. Salva, I. Rabhi","doi":"10.1109/ICSTW.2010.14","DOIUrl":"https://doi.org/10.1109/ICSTW.2010.14","url":null,"abstract":"WS-BPEL is an OASIS standard language used for describing interactions in Service Oriented Architectures (SOA). BPEL processes are usually overlapped in large Business applications composed of web services and such applications are more and more developed with respect of quality processes. Testability, which is the topic of this paper, is a quality criterion devoted for testing activities which evaluates the test coverage and the testing cost. We study the BPEL testability on two well-known testability criteria, observability and controllability. To evaluate them, we propose to transform ABPEL specifications into STS and to apply existing methods. Then, from STS testability issues, we deduce some patterns of ABPEL testability degradation. These latter help to finally propose testability enhancement methods of ABPEL specifications.","PeriodicalId":117410,"journal":{"name":"2010 Third International Conference on Software Testing, Verification, and Validation Workshops","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-04-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123146528","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Performance Testing of GUI Applications
M. Jovic and Matthias Hauswirth. ICSTW 2010. doi: 10.1109/ICSTW.2010.27

Current GUI testing approaches validate the functional correctness of interactive applications, but they neglect an important non-functional quality: performance. With the growing complexity of interactive applications, and with their gradual migration to resource-constrained devices such as smartphones, their performance as perceived by the human user is growing in importance. In this paper we propose to broaden the goal of GUI testing to include the validation of perceptible performance in addition to functional correctness.
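As a minimal sketch of what validating perceptible performance might involve, the decorator below times event handlers and flags episodes longer than a commonly cited perceptibility threshold of roughly 100 ms. The threshold, the names, and the simulated slow handler are assumptions; the paper's measurement model is not reproduced here.

```python
import time
import functools

PERCEPTIBLE_MS = 100  # commonly cited threshold for human-perceptible lag

def measure_latency(handler):
    """Wrap a GUI event handler and flag episodes a user could perceive."""
    @functools.wraps(handler)
    def wrapped(*args, **kwargs):
        start = time.perf_counter()
        result = handler(*args, **kwargs)
        elapsed_ms = (time.perf_counter() - start) * 1000
        if elapsed_ms > PERCEPTIBLE_MS:
            print(f"perceptible lag: {handler.__name__} took {elapsed_ms:.0f} ms")
        return result
    return wrapped

@measure_latency
def on_button_click():      # stand-in for a real GUI callback
    time.sleep(0.15)        # simulate a slow handler

on_button_click()
```

A performance-aware GUI test would drive such instrumented handlers through the same event sequences a functional test uses, failing when a latency budget is exceeded rather than when an assertion on state fails.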
{"title":"Performance Testing of GUI Applications","authors":"M. Jovic, Matthias Hauswirth","doi":"10.1109/ICSTW.2010.27","DOIUrl":"https://doi.org/10.1109/ICSTW.2010.27","url":null,"abstract":"Current GUI testing approaches validate the functional correctness of interactive applications, but they neglect an important non-functional quality: performance. With the growing complexity of interactive applications, and with their gradual migration to resource-constrained devices such as smartphones, their performance as perceived by the human user is growing in importance. In this paper we propose to broaden the goal of GUI testing to include the validation of perceptible performance in addition to functional correctness.","PeriodicalId":117410,"journal":{"name":"2010 Third International Conference on Software Testing, Verification, and Validation Workshops","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-04-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130186575","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Metamorphic Testing of Stochastic Optimisation
S. Yoo. ICSTW 2010. doi: 10.1109/ICSTW.2010.26

Testing stochastic optimisation algorithms presents a unique challenge for two reasons. First, these algorithms are non-testable programs: if the test oracle were known, there would be no need for the algorithm in the first place. Second, their performance can vary with the problem instances they are used to solve. This paper applies statistical metamorphic testing to stochastic optimisation algorithms and investigates the impact that different problem instances have on testing them. The paper presents an empirical evaluation of the approach using instances of the Next Release Problem (NRP). The effectiveness of the testing method is evaluated using mutation testing. The results show that, despite the challenges posed by the stochastic nature of optimisation algorithms, metamorphic testing can be effective in testing them.
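A sketch of statistical metamorphic testing on a toy NRP-like instance, assuming a simple stochastic hill climber as the optimiser and the metamorphic relation "doubling every profit doubles the achieved profit". Repeated runs on the original and transformed instances are compared with a two-sample statistical test; the instance data, parameters, and choice of test are invented for illustration and are not the paper's experimental setup.

```python
import random
from statistics import mean
from scipy.stats import ks_2samp

costs   = [4, 3, 5, 2, 6, 1]
profits = [7, 2, 9, 4, 8, 3]
BUDGET  = 10

def hill_climb(profit_vector, steps=50):
    """Toy stochastic optimiser: maximise profit under the cost budget."""
    def value(sol):
        cost = sum(c for c, take in zip(costs, sol) if take)
        if cost > BUDGET:
            return -1  # infeasible
        return sum(p for p, take in zip(profit_vector, sol) if take)
    sol = [False] * len(costs)
    for _ in range(steps):
        i = random.randrange(len(sol))
        neighbour = sol[:]
        neighbour[i] = not neighbour[i]
        if value(neighbour) >= value(sol):
            sol = neighbour
    return value(sol)

# Metamorphic relation: doubling every profit should double the achieved
# profit, so the rescaled output distributions should be indistinguishable.
runs_orig   = [hill_climb(profits) for _ in range(30)]
runs_scaled = [hill_climb([2 * p for p in profits]) / 2 for _ in range(30)]

stat, p = ks_2samp(runs_orig, runs_scaled)
print(f"means: {mean(runs_orig):.1f} vs {mean(runs_scaled):.1f}, p = {p:.3f}")
print("relation violated" if p < 0.01 else "no significant difference: relation holds")
```

Because any single run is noisy, the oracle is statistical: the relation is judged over distributions of outputs, which is the essential move that makes metamorphic testing workable for stochastic algorithms.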
{"title":"Metamorphic Testing of Stochastic Optimisation","authors":"S. Yoo","doi":"10.1109/ICSTW.2010.26","DOIUrl":"https://doi.org/10.1109/ICSTW.2010.26","url":null,"abstract":"Testing stochastic optimisation algorithms presents an unique challenge because of two reasons. First, these algorithms are non-testable programs, i.e. if the test oracle was known, there wouldn't have been the need for those algorithms in the first place. Second, their performance can vary depending on the problem instances they are used to solve. This paper applies the statistical metamorphic testing approach to stochastic optimisation algorithms and investigates the impact that different problem instances have on testing optimisation algorithms. The paper presents an empirical evaluation of the approach using instances of Next Release Problem (NRP). The effectiveness of the testing method is evaluated using mutation testing. The result shows that, despite the challenges from the stochastic nature of the optimisation algorithm, metamorphic testing can be effective in testing them.","PeriodicalId":117410,"journal":{"name":"2010 Third International Conference on Software Testing, Verification, and Validation Workshops","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-04-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127661246","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
YETI on the Cloud
M. Oriol and Faheem Ullah. ICSTW 2010. doi: 10.1109/ICSTW.2010.68

The York Extensible Testing Infrastructure (YETI) is an automated random testing tool that can test programs written in various programming languages. While YETI is one of the fastest random testing tools, achieving over a million method calls per minute on fast code, testing large programs or slow code, such as memory-intensive libraries, might benefit from parallel execution of testing sessions. This paper presents the cloud-enabled version of YETI. It relies on the Hadoop package and its map/reduce implementation to distribute tasks over potentially many computers, which would allow the cloud version of YETI to be distributed over Amazon's Elastic Compute Cloud (EC2).
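A Hadoop-Streaming-style sketch of how random-testing sessions might be fanned out with map/reduce: the mapper runs one (simulated) testing session per input line and the reducer aggregates failures per class under test. YETI's actual Hadoop integration is not reproduced here; the session simulation, input format, and failure rate are assumptions.

```python
#!/usr/bin/env python3
# usage: cat sessions.txt | python yeti_mr.py map | sort | python yeti_mr.py reduce
import sys
import random

def run_session(class_name, seed, calls=1000):
    """Stand-in for a YETI random-testing session; returns a failure count."""
    rng = random.Random(seed)
    return sum(1 for _ in range(calls) if rng.random() < 0.001)

def mapper():
    for line in sys.stdin:                  # each line: "com.example.Stack 42"
        class_name, seed = line.split()
        print(f"{class_name}\t{run_session(class_name, int(seed))}")

def reducer():
    totals = {}
    for line in sys.stdin:                  # each line: "com.example.Stack\t3"
        key, count = line.rsplit("\t", 1)
        totals[key] = totals.get(key, 0) + int(count)
    for key, total in totals.items():
        print(f"{key}\t{total}")

if __name__ == "__main__":
    mapper() if sys.argv[1] == "map" else reducer()
```

The appeal of the map/reduce framing is that sessions are independent, so adding EC2 nodes scales throughput almost linearly for slow or memory-intensive code under test.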
{"title":"YETI on the Cloud","authors":"M. Oriol, Faheem Ullah","doi":"10.1109/ICSTW.2010.68","DOIUrl":"https://doi.org/10.1109/ICSTW.2010.68","url":null,"abstract":"The York Extensible Testing Infrastructure (YETI) is an automated random testing tool that allows to test programs written in various programming languages. While YETI is one of the fastest random testing tools with over a million method calls per minute on fast code, testing large programs or slow code -- such as libraries using intensively the memory -- might benefit from parallel executions of testing sessions. This paper presents the cloud-enabled version of YETI. It relies on the Hadoop package and its map/reduce implementation to distribute tasks over potentially many computers. This would allow to distribute the cloud version of YETI over Amazon's Elastic Compute Cloud (EC2).","PeriodicalId":117410,"journal":{"name":"2010 Third International Conference on Software Testing, Verification, and Validation Workshops","volume":"320 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-04-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121024067","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mutation Operators for Agent-Based Models
S. Adra and Phil McMinn. ICSTW 2010. doi: 10.1109/ICSTW.2010.9

This short paper argues that agent-based models are an independent class of software application with their own unique properties, and that suitable, tailored mutation operators therefore need to be defined for them. Testing agent-based models can be very challenging, and no established testing technique has yet been introduced for such systems. This paper discusses the application of mutation testing techniques and proposes mutation operators that can imitate potential programmer errors and result in faulty simulation runs of a model.
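To make the idea of a model-specific operator concrete, the sketch below mutates a toy one-dimensional model by shrinking every agent's neighbour-sensing radius, imitating a plausible parameter error. Both the model and the operator are invented for illustration and are not taken from the paper's proposed operator set.

```python
import random

class Agent:
    """Minimal 1-D agent that senses neighbours within a fixed radius."""
    def __init__(self, x, radius=2.0):
        self.x, self.radius = x, radius

    def neighbours(self, others):
        return [a for a in others
                if a is not self and abs(a.x - self.x) <= self.radius]

def sensing_radius_mutant(agents, delta=-1.0):
    """Hypothetical mutation operator: perturb every agent's sensing radius."""
    for a in agents:
        a.radius += delta
    return agents

random.seed(0)
agents = [Agent(random.uniform(0, 10)) for _ in range(20)]
before = sum(len(a.neighbours(agents)) for a in agents)

mutants = sensing_radius_mutant(agents)   # apply the operator once
after = sum(len(a.neighbours(mutants)) for a in mutants)

# A test comparing emergent statistics (here, total neighbour links) between
# original and mutant runs would be expected to kill this mutant.
print(f"total neighbour links: original={before}, mutant={after}")
```

The point such operators probe is that agent-level errors often surface only as shifts in emergent, population-level behaviour, so tests must assert on simulation-level statistics rather than single agent states.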
{"title":"Mutation Operators for Agent-Based Models","authors":"S. Adra, Phil McMinn","doi":"10.1109/ICSTW.2010.9","DOIUrl":"https://doi.org/10.1109/ICSTW.2010.9","url":null,"abstract":"This short paper argues that agent-based models are an independent class of software application with their own unique properties, with the consequential need for the definition of suitable, tailored mutation operators. Testing agent-based models can be very challenging, and no established testing technique has yet been introduced for such systems. This paper discusses the application of mutation testing techniques, and mutation operators are proposed that can imitate potential programmer errors and result in faulty simulation runs of a model.","PeriodicalId":117410,"journal":{"name":"2010 Third International Conference on Software Testing, Verification, and Validation Workshops","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-04-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123771141","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
When to Migrate Software Testing to the Cloud?
T. Parveen and S. Tilley. ICSTW 2010. doi: 10.1109/ICSTW.2010.77

Testing is a challenging activity for many software engineering projects, especially for large-scale systems. The number of test cases can range from a few hundred to several thousand, requiring significant computing resources and lengthy execution times. Cloud computing offers the potential to address both of these issues: it provides resources such as virtualized hardware, effectively unlimited storage, and software services that can aid in reducing the execution time of large test suites in a cost-effective manner. However, migrating to the cloud is not without cost, nor is it necessarily the best solution to all testing problems. This paper discusses when to migrate software testing to the cloud from two perspectives: the characteristics of the application under test, and the types of testing performed on the application.
{"title":"When to Migrate Software Testing to the Cloud?","authors":"T. Parveen, S. Tilley","doi":"10.1109/ICSTW.2010.77","DOIUrl":"https://doi.org/10.1109/ICSTW.2010.77","url":null,"abstract":"Testing is a challenging activity for many software engineering projects, especially for large-scale systems. The amount of tests cases can range from a few hundred to several thousands, requiring significant computing resources and lengthy execution times. Cloud computing offers the potential to address both of these issues: it offers resources such as virtualized hardware, effectively unlimited storage, and software services that can aid in reducing the execution time of large test suites in a cost-effective manner. However, migrating to the cloud is not without cost, nor is it necessarily the best solution to all testing problems. This paper discusses when to migrate software testing to the cloud from two perspectives: the characteristics of an application under test, and the types of testing performed on the application.","PeriodicalId":117410,"journal":{"name":"2010 Third International Conference on Software Testing, Verification, and Validation Workshops","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-04-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132201611","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}