"The RELAY model of error detection and its application"
Debra J. Richardson and Margaret C. Thompson. Proc. Second Workshop on Software Testing, Verification, and Analysis, 1988. doi:10.1109/WST.1988.5378

The authors report on a model of error detection called RELAY, which provides a fault-based criterion for test data selection. The RELAY model builds on the testing theory introduced by L. Morell (1981), in which an error is created when an incorrect state is introduced at some fault location and is propagated if it persists to the output. The authors refine this theory by defining more precisely when an error is introduced and by distinguishing the persistence of an error through computations from its persistence through data-flow operations. They introduce the corresponding concepts of origination and transfer, denoting the first erroneous evaluation and the persistence of that erroneous evaluation, respectively.
"From weak to strong, dead or alive? An analysis of some mutation testing issues"
M. Woodward and K. Halewood. Proc. Second Workshop on Software Testing, Verification, and Analysis, 1988. doi:10.1109/WST.1988.5370

The authors argue that strong mutation testing and weak mutation testing are in fact extreme ends of a spectrum of mutation approaches. The term firm mutation is introduced to denote the middle ground of this spectrum. The authors also show, by means of a number of small examples, that there is a potential problem with the criterion for deciding whether a mutant is dead or alive, and a variety of solutions are suggested. Practical considerations for a firm-mutation testing system, offering greater user control over the nature of result comparison, are discussed. Such a system is currently under development as part of an interpretive development environment.
"Theoretical insights into fault-based testing"
Larry Morell. Proc. Second Workshop on Software Testing, Verification, and Analysis, 1988. doi:10.1109/WST.1988.5353

The author presents a framework that characterizes fault-based testing schemes based on how many prespecified faults are considered and on the contextual information used to deduce the absence of those faults. Established methods of fault-based testing are placed within this framework. Most methods either are limited to finite fault classes or focus on local effects of faults rather than global effects. A novel method of fault-based testing, called symbolic testing, is presented by which infinitely many prespecified faults can be proved to be absent from a program on the basis of the global effect the faults would have if they were present. Circumstances are discussed under which testing with a finite test set suffices to prove that infinitely many prespecified faults are not present in a program.
"STAD-a system for testing and debugging: user perspective"
B. Korel and J. Laski. Proc. Second Workshop on Software Testing, Verification, and Analysis, 1988. doi:10.1109/WST.1988.5350

A recently developed experimental integrated system for testing and debugging (STAD) is presented. Its testing part supports three data-flow coverage criteria. The debugging part guides the programmer in localizing faults by generating, and interactively verifying, hypotheses about their location. An example illustrates the process, followed by a debugging session, a discussion of the principles of data-flow testing, a structural testing scenario, and an introduction to the debugging principles embodied in STAD.
"High performance testing on SIMD machines"
E. W. Krauser, Aditya P. Mathur, and V. Rego. Proc. Second Workshop on Software Testing, Verification, and Analysis, 1988. doi:10.1109/WST.1988.5372

The authors describe how software testing using mutation analysis can be performed efficiently on an SIMD machine. They develop a technique that permits unified scheduling of multiple mutant programs on a very large SIMD machine. They believe that supercomputers with novel architectures can be used to enhance software productivity through techniques such as the one proposed.
"An empirical comparison of software fault tolerance and fault elimination"
T. Shimeall and N. Leveson. Proc. Second Workshop on Software Testing, Verification, and Analysis, 1988. doi:10.1109/WST.1988.5373

A large-scale experiment comparing software fault tolerance and software fault elimination as approaches to improving software reliability is described. Results are examined that bear on the appropriateness of the underlying assumptions of the two approaches, i.e., reducing standard testing procedures when voting is used to achieve fault tolerance in operational software, and using voting in the testing process. Among other results, it was found that n-version programming did not tolerate most of the faults detected by the fault-elimination techniques. The results also cast doubt on the effectiveness of using voting as a test oracle.
"Transferring testing and verification technology to industry"
D. Good. Proc. Second Workshop on Software Testing, Verification, and Analysis, 1988. doi:10.1109/WST.1988.5381

For software products there are three basic costs: the cost of failure, the cost of development, and the cost of maintenance. Today, the main leverage points are the costs of product development and maintenance. It is argued that to transfer testing and verification technology, it is necessary to persuade software product managers that using the new technology will reduce their development and maintenance costs, and that the cost of installing the technology will be overtaken rapidly by the savings accrued. The author holds that the key issues to be dealt with include how well prepared any of us are to argue that the use of testing and verification technology reduces costs, which costs it reduces and by how much, and where the convincing demonstrations of cost reduction are.
"Automatically generated acceptance test: a software reliability experiment"
P.W. Protzel. Proc. Second Workshop on Software Testing, Verification, and Analysis, 1988. doi:10.1109/WST.1988.5375

The author presents results of a software reliability experiment that investigates the feasibility of a new error detection method. The method can be used as an acceptance test and is based solely on empirical data about the behavior of internal states of a program. The experimental design uses the existing environment of a multiversion experiment, with the launch interceptor problem as the model problem. This allows a controlled experimental investigation of versions with well-known single and multiple faults, and the availability of an oracle permits determination of the error detection performance of the test. Fault-interaction phenomena are observed that have an amplifying effect on the number of error occurrences. Preliminary results indicate that all faults examined so far are detected by the acceptance test, which shows promise for further investigation and for the use of this test method in other applications.
"A semantics-based approach to analyzing concurrent programs"
R. Carver and K. Tai. Proc. Second Workshop on Software Testing, Verification, and Analysis, 1988. doi:10.1109/WST.1988.5365

Concurrent programs are analyzed by deriving constraints on the feasible SYN-sequences of a concurrent program from the program's syntactic and semantic information. These constraints, called feasibility constraints, capture restrictions on the orderings of synchronization events allowed by the program. By using feasibility constraints, one can obtain a better approximation of the feasibility set of a concurrent program and improve the effectiveness of error detection by static analysis.
"An experience with two symbolic execution-based approaches to formal verification of Ada tasking programs"
L. Dillon, R. Kemmerer, and L. J. Harrison. Proc. Second Workshop on Software Testing, Verification, and Analysis, 1988. doi:10.1109/WST.1988.5363

Two different approaches based on symbolic execution were used to prove partial correctness and general safety properties of Ada tasking programs. One approach interleaves the task components, while the other verifies the tasks in isolation and then performs cooperation proofs. Both approaches extend past efforts by incorporating tasking proof rules into the symbolic executor, allowing Ada programs with tasking to be formally verified. The limitations of each approach are presented, along with its advantages and disadvantages. In particular, the difficulty of dealing with communication statements inside loop structures is addressed in detail.