An experimental study of methods for executing test suites in memory constrained environments
S. Bhadra, Alexander P. Conrad, Charles Hurkes, B. Kirklin, G. M. Kapfhammer
Pub Date: 2009-05-18 | DOI: 10.1109/IWAST.2009.5069038
Software for memory constrained mobile devices is often implemented in the Java programming language because the Java compiler and virtual machine (JVM) provide enhanced safety, portability, and the potential for run-time optimization. However, testing time may increase substantially when memory is limited and the JVM employs a compiler to create native code bodies. This paper furnishes an empirical study that identifies the fundamental trade-offs associated with a method that uses adaptive native code unloading to perform memory constrained testing. The experimental results demonstrate that code unloading can reduce testing time by 17% and the code size of the test suite and application under test by 68% while maintaining the overall size of the JVM. We also find that the goal of reducing the space overhead of an automated testing technique is often at odds with the objective of decreasing the time required to test. Additional experiments reveal that using a complete record of test suite behavior, in contrast to a sample-based profile, does not enable the code unloader to make decisions that markedly reduce testing time. Finally, we identify test suite and application behaviors that may limit the effectiveness of our method for memory constrained test execution and we suggest ways to mitigate these challenges.
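The paper evaluates an unloader inside a JVM rather than publishing code, so the following is only a rough Java sketch of the kind of sample-based eviction policy such an unloader might use; the class, its names, and the budget-driven heuristic are assumptions, not the authors' implementation.

```java
import java.util.*;

/** Minimal sketch of a sample-based adaptive code unloader; all names are hypothetical. */
final class CodeUnloader {
    private final Map<String, Integer> samples = new HashMap<>();     // hotness profile
    private final Map<String, Long> compiled = new LinkedHashMap<>(); // method -> native code size
    private final long budgetBytes; // memory budget for native code bodies
    private long usedBytes = 0;

    CodeUnloader(long budgetBytes) { this.budgetBytes = budgetBytes; }

    /** Invoked by a periodic sampling profiler rather than on every call. */
    void recordSample(String method) { samples.merge(method, 1, Integer::sum); }

    /** Invoked by the JIT after it emits a native code body for a method. */
    void onCompile(String method, long codeSize) {
        compiled.put(method, codeSize);
        usedBytes += codeSize;
        evictIfNeeded();
    }

    private void evictIfNeeded() {
        // Under memory pressure, unload the coldest native bodies first; in a real
        // JVM the unloaded methods fall back to the interpreter and may be recompiled.
        while (usedBytes > budgetBytes && compiled.size() > 1) {
            String coldest = Collections.min(compiled.keySet(),
                    Comparator.comparingInt(m -> samples.getOrDefault(m, 0)));
            usedBytes -= compiled.remove(coldest);
        }
    }

    public static void main(String[] args) {
        CodeUnloader u = new CodeUnloader(10_000);
        u.recordSample("TestSuite.hotHelper");
        u.onCompile("TestSuite.hotHelper", 6_000);
        u.onCompile("TestCase42.coldSetup", 6_000); // exceeds budget: coldSetup is evicted
    }
}
```

The sketch makes the abstract's trade-off visible: evicting cold native bodies caps code size, but forces re-interpretation or recompilation, which is where testing time can grow.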
{"title":"An experimental study of methods for executing test suites in memory constrained environments","authors":"S. Bhadra, Alexander P. Conrad, Charles Hurkes, B. Kirklin, G. M. Kapfhammer","doi":"10.1109/IWAST.2009.5069038","DOIUrl":"https://doi.org/10.1109/IWAST.2009.5069038","url":null,"abstract":"Software for memory constrained mobile devices is often implemented in the Java programming language because the Java compiler and virtual machine (JVM) provide enhanced safety, portability, and the potential for run-time optimization. However, testing time may increase substantially when memory is limited and the JVM employs a compiler to create native code bodies. This paper furnishes an empirical study that identifies the fundamental trade-offs associated with a method that uses adaptive native code unloading to perform memory constrained testing. The experimental results demonstrate that code unloading can reduce testing time by 17% and the code size of the test suite and application under test by 68% while maintaining the overall size of the JVM. We also find that the goal of reducing the space overhead of an automated testing technique is often at odds with the objective of decreasing the time required to test. Additional experiments reveal that using a complete record of test suite behavior, in contrast to a sample-based profile, does not enable the code unloader to make decisions that markedly reduce testing time. Finally, we identify test suite and application behaviors that may limit the effectiveness of our method for memory constrained test execution and we suggest ways to mitigate these challenges.","PeriodicalId":401585,"journal":{"name":"2009 ICSE Workshop on Automation of Software Test","volume":"205 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116186189","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Automated testing of a converged conferencing application
Venkita Subramonian, Eric Cheung, G. Karam
Pub Date: 2009-05-18 | DOI: 10.1109/IWAST.2009.5069047
In this paper, we describe our experience with the automated testing of a mission-critical internal Voice-over-IP (VoIP) conferencing application that presents both a web interface and a voice interface. We document the challenges that we had to overcome when testing this application and then present our solution using open-source testing tools. The lessons learned from this experience may be applicable to a broad class of applications that pose similar testing challenges.
{"title":"Automated testing of a converged conferencing application","authors":"Venkita Subramonian, Eric Cheung, G. Karam","doi":"10.1109/IWAST.2009.5069047","DOIUrl":"https://doi.org/10.1109/IWAST.2009.5069047","url":null,"abstract":"In this paper, we describe our experience with automated testing of a mission-critical internal Voice-over-IP (VoIP) conferencing application which presents a web interface as well as a voice interface. We document the challenges that we had to overcome when testing this application and then present our solution using open source testing tools. The lessons learned from this experience may be applicable to a broad class of applications that pose similar testing challenges.","PeriodicalId":401585,"journal":{"name":"2009 ICSE Workshop on Automation of Software Test","volume":"68 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114261894","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Calculating BPEL test coverage through instrumentation
Daniel Lübke, Leif Singer, Alex Salnikow
Pub Date: 2009-05-18 | DOI: 10.1109/IWAST.2009.5069049
Assessing the quality of tests for BPEL processes is a difficult task in projects following SOA principles.
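The abstract is truncated here, but the title indicates that coverage is computed by instrumenting the process. As a hedged illustration only, the sketch below shows the simplest form such bookkeeping could take: each instrumented BPEL activity reports a marker, and coverage is the fraction of activities that reported. The activity names and the marker callback are hypothetical, not the paper's instrumentation scheme.

```java
import java.util.*;

/** Minimal sketch of activity-coverage bookkeeping for an instrumented BPEL process.
 *  Assumes the process has been rewritten so each activity emits a marker event. */
final class BpelCoverage {
    private final Set<String> allActivities;
    private final Set<String> executed = new HashSet<>();

    BpelCoverage(Collection<String> activityIds) {
        this.allActivities = new HashSet<>(activityIds);
    }

    /** Callback wired to the marker an instrumented <invoke>/<assign>/... emits. */
    void markExecuted(String activityId) {
        if (allActivities.contains(activityId)) executed.add(activityId);
    }

    /** Activity coverage = executed activities / all activities. */
    double activityCoverage() {
        return allActivities.isEmpty() ? 1.0
                : (double) executed.size() / allActivities.size();
    }

    public static void main(String[] args) {
        BpelCoverage cov = new BpelCoverage(List.of("receiveOrder", "checkStock", "reply"));
        cov.markExecuted("receiveOrder");
        cov.markExecuted("reply");
        System.out.printf("activity coverage: %.0f%%%n", 100 * cov.activityCoverage());
    }
}
```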
{"title":"Calculating BPEL test coverage through instrumentation","authors":"Daniel Lübke, Leif Singer, Alex Salnikow","doi":"10.1109/IWAST.2009.5069049","DOIUrl":"https://doi.org/10.1109/IWAST.2009.5069049","url":null,"abstract":"Assessing the quality of tests for BPEL processes is a diffcult task in projects following SOA principles.","PeriodicalId":401585,"journal":{"name":"2009 ICSE Workshop on Automation of Software Test","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125778281","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
GUI savvy end-to-end testing with smart monkeys
Birgit Hofer, B. Peischl, F. Wotawa
Pub Date: 2009-05-18 | DOI: 10.1109/IWAST.2009.5069051
In this article we report on the development of a graphical-user-interface-savvy test monkey and its successful application to the Windows calculator. Our test monkey allows for a pragmatic approach to providing an abstract model of the GUI-relevant behavior of the application under test, and it relies on a readily available GUI automation tool. Besides outlining the employed test oracles, we explain our novel decision-based state machine model, the associated language, and the random test algorithm. Moreover, we outline the pragmatic model creation concept and report on its concrete application in an end-to-end test setting with a Windows Vista front-end. Notably, in this specific scenario, our monkey was able to identify a misbehavior in a well-established application and provided valuable insight for reproducing the detected fault.
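As a rough sketch of a random test algorithm over an abstract GUI model (not the authors' tool or modeling language): the monkey walks a state machine, fires one randomly chosen enabled event per step, and logs the event trace so a detected fault can be replayed. The states and events are invented for illustration.

```java
import java.util.*;

/** Minimal sketch of a monkey performing a random walk over an abstract GUI state machine. */
final class TestMonkey {
    record Transition(String event, String target) {}
    private final Map<String, List<Transition>> model = new HashMap<>();
    private final Random rng = new Random(42); // fixed seed so failures can be replayed

    void addTransition(String from, String event, String to) {
        model.computeIfAbsent(from, k -> new ArrayList<>()).add(new Transition(event, to));
    }

    /** Walks the model, firing one randomly chosen enabled event per step. */
    List<String> run(String start, int steps) {
        List<String> trace = new ArrayList<>();
        String state = start;
        for (int i = 0; i < steps; i++) {
            List<Transition> enabled = model.getOrDefault(state, List.of());
            if (enabled.isEmpty()) break;            // dead end: stop this walk
            Transition t = enabled.get(rng.nextInt(enabled.size()));
            trace.add(t.event());                    // keep the log for fault reproduction
            // a real monkey would drive the GUI automation tool here and
            // consult its test oracle before accepting the new state
            state = t.target();
        }
        return trace;
    }

    public static void main(String[] args) {
        TestMonkey m = new TestMonkey();
        m.addTransition("standard", "click(MS)", "standard");
        m.addTransition("standard", "menu(View->Scientific)", "scientific");
        m.addTransition("scientific", "menu(View->Standard)", "standard");
        System.out.println(m.run("standard", 10));
    }
}
```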
{"title":"GUI savvy end-to-end testing with smart monkeys","authors":"Birgit Hofer, B. Peischl, F. Wotawa","doi":"10.1109/IWAST.2009.5069051","DOIUrl":"https://doi.org/10.1109/IWAST.2009.5069051","url":null,"abstract":"In this article we report on the development of a graphical user interface-savvy test monkey and its successful application to the Windows calculator. Our novel test monkey allows for a pragmatic approach in providing an abstract model of the GUI relevant behavior of the application under test and relies on a readily available GUI automation tool. Besides of outlining the employed test oracles we explain our novel decision-based state machine model, the associated language and the random test algorithm. Moreover we outline the pragmatic model creation concept and report on its concrete application in an end-to-end test setting with a Windows Vista front-end. Notably in this specific scenario, our novel monkey was able to identify a misbehavior in a well-established application and provided valuable insight for reproducing the detected fault.","PeriodicalId":401585,"journal":{"name":"2009 ICSE Workshop on Automation of Software Test","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128809086","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Experiments on the test case length in specification based test case generation
G. Fraser, A. Gargantini
Pub Date: 2009-05-18 | DOI: 10.1109/IWAST.2009.5069037
Many different techniques have been proposed to address the problem of automated test case generation, varying in a range of properties and resulting in very different test cases. In this paper we investigate the effects of test case length on the resulting test suites: intuitively, longer test cases should serve to find more difficult faults and should reduce the number of test cases necessary to achieve the test objectives. On the other hand, longer test cases have disadvantages, such as higher computational costs, and they are more difficult to interpret manually. Consequently, should one aim to generate many short test cases or fewer but longer ones? We present the results of a set of experiments performed in a scenario of specification-based testing for reactive systems. As expected, a long test case can achieve higher coverage and fault-detecting capability than a short one, and giving preference to longer test cases can generally help reduce the size of test suites, though it can also have the opposite effect, for example, when minimization is applied.
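To make the budget trade-off concrete, the sketch below generates random input sequences for a reactive system under a fixed total event budget; halving the per-test length doubles the number of test cases. This is an illustration under assumed parameters, not the experimental setup of the paper, and the input alphabet is invented.

```java
import java.util.*;

/** Minimal sketch of a length-parameterized random generator for reactive-system
 *  test cases: a fixed budget of input events is spent either on many short
 *  sequences or on fewer long ones. */
final class LengthExperiment {
    static List<List<String>> generate(int totalBudget, int lengthPerTest,
                                       List<String> alphabet, Random rng) {
        List<List<String>> suite = new ArrayList<>();
        for (int spent = 0; spent + lengthPerTest <= totalBudget; spent += lengthPerTest) {
            List<String> test = new ArrayList<>(lengthPerTest);
            for (int i = 0; i < lengthPerTest; i++)
                test.add(alphabet.get(rng.nextInt(alphabet.size())));
            suite.add(test);
        }
        return suite;
    }

    public static void main(String[] args) {
        List<String> alphabet = List.of("on", "off", "inc", "dec", "reset");
        Random rng = new Random(1);
        // Same budget of 100 events: 20 tests of length 5 vs 2 tests of length 50.
        System.out.println(generate(100, 5, alphabet, rng).size() + " short tests");
        System.out.println(generate(100, 50, alphabet, rng).size() + " long tests");
    }
}
```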
{"title":"Experiments on the test case length in specification based test case generation","authors":"G. Fraser, A. Gargantini","doi":"10.1109/IWAST.2009.5069037","DOIUrl":"https://doi.org/10.1109/IWAST.2009.5069037","url":null,"abstract":"Many different techniques have been proposed to address the problem of automated test case generation, varying in a range of properties and resulting in very different test cases. In this paper we investigate the effects of the test case length on resulting test suites: Intuitively, longer test cases should serve to find more difficult faults but will reduce the number of test cases necessary to achieve the test objectives. On the other hand longer test cases have disadvantages such as higher computational costs and they are more difficult to interpret manually. Consequently, should one aim to generate many short test cases or fewer but longer test cases? We present the results of a set of experiments performed in a scenario of specification based testing for reactive systems. As expected, a long test case can achieve higher coverage and fault detecting capability than a short one, while giving preference to longer test cases in general can help reduce the size of test suites but can also have the opposite effect, for example, if minimization is applied.","PeriodicalId":401585,"journal":{"name":"2009 ICSE Workshop on Automation of Software Test","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131279906","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Lazy symbolic evaluation and its path constraints solution
Mengxiang Lin, Yin-li Chen, Kai Yu, Guo-shi Wu
Pub Date: 2009-05-18 | DOI: 10.1109/IWAST.2009.5069044
Some program structures in modern programming languages cannot be reasoned about symbolically. Lazy symbolic evaluation, as proposed in this paper, introduces a lazy evaluation strategy into traditional symbolic execution to address this issue. The constraint variables in the path constraints generated by lazy symbolic evaluation may be either input or intermediate variables. To eliminate the latter, concrete values for the related input variables are first obtained by constraint solving or search. Then the given path is executed again using concrete and symbolic values. The procedure is repeated until the resulting path constraint is over input variables alone. We have implemented a prototype tool and performed several experiments. Preliminary results show the feasibility of our approach.
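A hedged sketch of the refinement loop the abstract describes: while the path constraint still mentions intermediate variables, solve for concrete inputs and re-execute the path concretely and symbolically. The Solver and Executor interfaces below are stand-ins, not the prototype's API, and the identifier check is deliberately crude.

```java
import java.util.*;

/** Minimal sketch of the refinement loop behind lazy symbolic evaluation:
 *  re-execute the path with concrete values until the path constraint mentions
 *  input variables only. Solver and Executor are hypothetical stand-ins. */
final class LazySymbolicLoop {
    interface Solver { Map<String, Integer> solve(List<String> constraint); }
    interface Executor { List<String> execute(Map<String, Integer> inputs); }

    static List<String> refine(List<String> constraint, Set<String> inputVars,
                               Solver solver, Executor executor) {
        for (int round = 0; round < 100; round++) {   // bound the loop in this sketch
            if (!mentionsNonInput(constraint, inputVars)) break;
            Map<String, Integer> inputs = solver.solve(constraint); // concrete witnesses
            constraint = executor.execute(inputs);  // concrete + symbolic re-execution
        }
        return constraint; // ideally now expressed over input variables alone
    }

    /** Crude syntactic check: any identifier outside the input set counts as intermediate. */
    private static boolean mentionsNonInput(List<String> constraint, Set<String> inputs) {
        return constraint.stream()
                .flatMap(c -> Arrays.stream(c.split("\\W+")))
                .filter(tok -> tok.matches("[a-zA-Z_]\\w*"))
                .anyMatch(v -> !inputs.contains(v));
    }
}
```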
{"title":"Lazy symbolic evaluation and its path constraints solution","authors":"Mengxiang Lin, Yin-li Chen, Kai Yu, Guo-shi Wu","doi":"10.1109/IWAST.2009.5069044","DOIUrl":"https://doi.org/10.1109/IWAST.2009.5069044","url":null,"abstract":"Some program structures in modern programming languages can not be reasoned about symbolically. Lazy symbolic evaluation as proposed in this paper introduces a lazy evaluation strategy into traditional symbolic execution in order to address the issue. Constraint variables in path constraints generated by lazy symbolic evaluation may be input or intermediate variables. To eliminate the latter, concrete values for related input variables are first obtained by constraints solving or searching processes. Then, the given path is executed again using concrete and symbolic values. The procedure is repeated until the resulting path constraint is on input variables alone. We have implemented a prototype tool and performed several experiments. Preliminary results show the feasibility of our approach.","PeriodicalId":401585,"journal":{"name":"2009 ICSE Workshop on Automation of Software Test","volume":"133 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121650465","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Towards an automated testing framework to manage variability using the UML Testing Profile
B. P. Lamancha, Macario Polo, M. Piattini
Pub Date: 2009-05-18 | DOI: 10.1109/IWAST.2009.5069036
This paper proposes an extension to the UML Testing Profile to manage variability in testing artifacts for software product lines (SPLs). The proposed extension has two main points: (i) defining an extended architecture for the UML Testing Profile that deals with variability in the test models, and (ii) defining the behavior needed to include variation points in the SPL. To this end, the work focuses on test case behavior represented by sequence diagrams and defines an extension to UML interactions for SPLs.
{"title":"Towards an automated testing framework to manage variability using the UML Testing Profile","authors":"B. P. Lamancha, Macario Polo, M. Piattini","doi":"10.1109/IWAST.2009.5069036","DOIUrl":"https://doi.org/10.1109/IWAST.2009.5069036","url":null,"abstract":"This paper proposes an extension to the UML Testing Profile to manage variability in testing artifacts for software product lines. The proposed extension has two main points: (i) Defining an extended architecture for the UML Testing Profile to deal with variability in the test models, and (ii) Defining the behavior to include variation points in the SPL. To this aim, this work focuses on the test case behaviour represented by sequence diagrams and defines an extension to UML interactions for SPL.","PeriodicalId":401585,"journal":{"name":"2009 ICSE Workshop on Automation of Software Test","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132449404","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Using testing trace for automatic user categorization
J. J. Li, D. Weiss
Pub Date: 2009-05-18 | DOI: 10.1109/IWAST.2009.5069053
Testing has always been an indispensable part of software development. With the increasing amount of testing, the volume of data and information generated from testing grows substantially. The question arises of how to take advantage of these testing data beyond traditional coverage analysis and debugging. In this paper, we propose an approach that uses the test trace data of a software application for run-time user categorization. It collects the test execution traces of the programs under study and derives internal metrics for different categories from the trace information. At run time, we look at a user's artifacts as well as the user's behavior to categorize users into predetermined groups and serve them accordingly. Our work in progress is to apply this method to a software product line, PolyFlow, including a web service that generates, runs, and analyzes test cases for the programs under study. One benefit of our method is that it does not require the storage of user profiles.
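As an illustration of the idea (not the PolyFlow implementation), the sketch below summarizes each category as a centroid of trace-derived metrics and assigns a run-time user to the nearest centroid; the metrics and category names are made up, and note that no per-user profile is stored, only the centroids.

```java
import java.util.*;

/** Minimal sketch of trace-based user categorization by nearest metric centroid. */
final class UserCategorizer {
    private final Map<String, double[]> centroids = new HashMap<>();

    /** Derive a category centroid offline from test-execution traces. */
    void learnCategory(String name, double[] metricCentroid) {
        centroids.put(name, metricCentroid);
    }

    /** Classify a user's observed behavior by Euclidean distance to each centroid. */
    String categorize(double[] observedMetrics) {
        return centroids.entrySet().stream()
                .min(Comparator.comparingDouble(e -> distance(e.getValue(), observedMetrics)))
                .map(Map.Entry::getKey)
                .orElseThrow();
    }

    private static double distance(double[] a, double[] b) {
        double sum = 0;
        for (int i = 0; i < a.length; i++) sum += (a[i] - b[i]) * (a[i] - b[i]);
        return Math.sqrt(sum);
    }

    public static void main(String[] args) {
        UserCategorizer c = new UserCategorizer();
        c.learnCategory("novice", new double[]{0.2, 0.9});  // e.g. feature breadth, help usage
        c.learnCategory("expert", new double[]{0.8, 0.1});
        System.out.println(c.categorize(new double[]{0.7, 0.2})); // -> expert
    }
}
```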
{"title":"Using testing trace for automatic user categorization","authors":"J. J. Li, D. Weiss","doi":"10.1109/IWAST.2009.5069053","DOIUrl":"https://doi.org/10.1109/IWAST.2009.5069053","url":null,"abstract":"Testing has always been an indispensable part of software development. With the increasing amount of testing, the volume of data and information generated from testing grows substantially. The question arises on how to take advantage of the testing data, besides traditional coverage and debugging. In this paper, we propose an approach of using test trace data of a software application to its run-time user categorization. It collects test execution trace of programs studied by the software tool, and derives internal metrics of different categories from the trace information. During run time, we look at the user's artifacts as well as the user's behavior to categorize them into predetermined groups and serve them accordingly. Our work in-progress is to apply this method to a software product line, PolyFlow, including a web service that generates, runs, and analyzes test cases of programs under study. One benefit of our method is that it does not require storage of user profiles.","PeriodicalId":401585,"journal":{"name":"2009 ICSE Workshop on Automation of Software Test","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133180560","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Developing acceptance tests from existing documentation using annotations: An experiment
David Connolly, Frank Keenan, F. McCaffery
Pub Date: 2009-05-18 | DOI: 10.1109/IWAST.2009.5069050
The importance of good software testing is often reported. Traditionally, acceptance testing is the last stage of the testing process before release to the customer. Unfortunately, it is not always appropriate to wait so long for customer feedback. Emerging agile methods recognise this and promote close interaction between the customer and developers for early acceptance testing, often before implementation commences. Indeed, Acceptance Test Driven Development (ATDD) is a process that uses customer interaction to define tests and tool support to automate and execute them. However, with existing tools, tests are usually written from new descriptions or rewritten from existing documentation. Here, the challenge is to allow developers and customers to annotate existing documentation and automatically generate acceptance tests without rewrites or new descriptions. This paper introduces the related ideas and describes an experiment that assesses the value of using annotated text to create acceptance tests.
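As a hedged illustration of the generation step: the sketch below scans documentation for inline annotations and emits one acceptance-test step per annotation. The [[action|argument|expected]] notation is our invention; the paper's actual annotation scheme may differ.

```java
import java.util.*;
import java.util.regex.*;

/** Minimal sketch of turning annotated documentation into acceptance-test steps. */
final class AnnotationExtractor {
    private static final Pattern ANNOTATION =
            Pattern.compile("\\[\\[(.+?)\\|(.+?)\\|(.+?)\\]\\]");

    /** Scans documentation text and emits one test step per annotation. */
    static List<String> extractSteps(String documentation) {
        List<String> steps = new ArrayList<>();
        Matcher m = ANNOTATION.matcher(documentation);
        while (m.find()) {
            steps.add(String.format("step: %s(%s) expect %s",
                    m.group(1).trim(), m.group(2).trim(), m.group(3).trim()));
        }
        return steps;
    }

    public static void main(String[] args) {
        String doc = "When the clerk [[enterOrder|widget,3|accepted]] the system "
                   + "confirms and [[checkStock|widget|2 remaining]] is shown.";
        extractSteps(doc).forEach(System.out::println);
    }
}
```

The appeal of this style is that the documentation itself stays readable: the annotations wrap existing text rather than replacing it with a separate test description.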
{"title":"Developing acceptance tests from existing documentation using annotations: An experiment","authors":"David Connolly, Frank Keenan, F. McCaffery","doi":"10.1109/IWAST.2009.5069050","DOIUrl":"https://doi.org/10.1109/IWAST.2009.5069050","url":null,"abstract":"The importance of good software testing is often reported. Traditionally, acceptance testing is the last stage of the testing process before release to the customer. Unfortunately, it is not always appropriate to wait so long for customer feedback. Emerging agile methods recognise this and promote close interaction between the customer and developers for early acceptance testing, often before implementation commences. Indeed, Acceptance Test Driven Development (ATDD) is a process that uses customer interaction to define tests and tool support to automate and execute these. However, with existing tools, tests are usually written from new descriptions or rewritten from existing documentation. Here, the challenge is to allow developers and customers to annotate existing documentation and automatically generate acceptance tests without rewrites or new descriptions. This paper introduces the related ideas and describes a particular experiment that assesses the value of using annotated text to create acceptance tests.","PeriodicalId":401585,"journal":{"name":"2009 ICSE Workshop on Automation of Software Test","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128779182","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The automated generation of test cases using an extended domain based reliability model
Alberto Avritzer, E. Weyuker
Pub Date: 2009-05-18 | DOI: 10.1109/IWAST.2009.5069040
We present a new approach for the automated generation of test cases to be used for demonstrating the reliability of large industrial mission-critical systems. In this paper we extend earlier work by adding failure tracking and transient Markov chain analysis. Results from the transient Markov chain analysis are used to estimate the software reliability at a given system execution time.
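For readers unfamiliar with transient analysis: the sketch below propagates a state probability vector through a discrete-time transition matrix and reads the estimated reliability at execution time t as the probability mass not yet absorbed in the failure state. The three-state chain is a made-up example, not a model from the paper.

```java
/** Minimal sketch of transient Markov chain analysis: propagate the state
 *  probability vector pi through the transition matrix P and read off the
 *  mass in an absorbing failure state. */
final class TransientMarkov {
    /** One discrete time step: pi' = pi * P (P is row-stochastic). */
    static double[] step(double[] pi, double[][] P) {
        double[] next = new double[pi.length];
        for (int i = 0; i < pi.length; i++)
            for (int j = 0; j < pi.length; j++)
                next[j] += pi[i] * P[i][j];
        return next;
    }

    public static void main(String[] args) {
        // States: 0 = idle, 1 = busy, 2 = failed (absorbing).
        double[][] P = {
                {0.90, 0.09, 0.01},
                {0.20, 0.79, 0.01},
                {0.00, 0.00, 1.00}
        };
        double[] pi = {1.0, 0.0, 0.0}; // start in idle
        for (int t = 1; t <= 100; t++) pi = step(pi, P);
        // Estimated reliability at system execution time t = 100 steps:
        System.out.printf("R(100) = %.4f%n", 1.0 - pi[2]);
    }
}
```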
{"title":"The automated generation of test cases using an extended domain based reliability model","authors":"Alberto Avritzer, E. Weyuker","doi":"10.1109/IWAST.2009.5069040","DOIUrl":"https://doi.org/10.1109/IWAST.2009.5069040","url":null,"abstract":"We present a new approach for the automated generation of test cases to be used for demonstrating the reliability of large industrial mission-critical systems. In this paper we extend earlier work by adding failure tracking and transient Markov chain analysis. Results from the transient Markov chain analysis are used to estimate the software reliability at a given system execution time.","PeriodicalId":401585,"journal":{"name":"2009 ICSE Workshop on Automation of Software Test","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127709331","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}