An application of the m-EVES verification system
D. Craigen. Proceedings of the Second Workshop on Software Testing, Verification, and Analysis, 1988. doi:10.1109/WST.1988.5351

After a brief overview of the m-EVES system, an application of m-EVES to a proof of a nontrivial security property (noninterference) for a pedagogical computer system (the Low Water Mark system) is discussed. An example demonstrates some of the power and novel features of m-EVES. The author concludes with a comparison of the m-EVES solution with similar efforts using the Gypsy verification environment and the Boyer-Moore theorem prover.

Testing in top-down program development
J. Laski. Proceedings of the Second Workshop on Software Testing, Verification, and Analysis, 1988. doi:10.1109/WST.1988.5355

A program design methodology is presented that advocates the synthesis of tests hand-in-hand with the design at every stage of program development and uses them for early detection of design flaws. It involves formal specifications of the abstract programs and abstract data refinements that appear in the design. The main findings are: (1) formalization facilitates black-box and design-based functional testing; (2) abstract data testing allows a more natural selection of tests than concrete data testing; (3) black-box testing achieves significant structural coverage; and (4) the method can be combined with formal verification.

A methodology and distributed tool for debugging dataflow programs
N. J. Wahl, S. Schach. Proceedings of the Second Workshop on Software Testing, Verification, and Analysis, 1988. doi:10.1109/WST.1988.5358

A distributed debugging methodology for data-flow architectures is presented. A graphical debugging simulator for a data-flow machine is being developed to debug compiled high-level data-flow programs. The ideas of the debugging methodology are outlined and the debugging simulator is described. Special emphasis is placed on the multipass feature of the debugging simulator, which solves the nonreproducibility problem of distributed debuggers and allows the user to execute the program more than once with the identical instruction sequence, to be sure that a fault has been removed.

Protocol test generation, trace analysis and verification techniques
B. Sarikaya. Proceedings of the Second Workshop on Software Testing, Verification, and Analysis, 1988. doi:10.1109/WST.1988.5364

A survey is presented of recent developments in protocol validation for open systems interconnection. The discussion includes the two components any protocol test system must have, a test sequence generator and a trace checker, as well as protocol verification techniques. It covers protocol specification techniques, conformance testing, test sequences, two main test selection tools, trace checking, and verification.

Partition testing does not inspire confidence
R. Hamlet, R. Taylor. Proceedings of the Second Workshop on Software Testing, Verification, and Analysis, 1988. doi:10.1109/WST.1988.5376

The authors strengthen previously published negative results about partition testing and try to reconcile them with its intuitively perceived value. Partition testing is shown to be more valuable than random testing only when the partitions are narrowly based on expected faults and there is a good chance of failure. For gaining confidence from successful tests, partition testing as usually practiced has little value.

Artificial systems for software engineering studies
J. H. Rowland. Proceedings of the Second Workshop on Software Testing, Verification, and Analysis, 1988. doi:10.1109/WST.1988.5356

A description is given of techniques for the automatic generation of large artificial software systems that can be used for laboratory studies of testing and integration strategies, reliability models, etc. A prototype generator is described that produces code for such systems by constructing a large number of nearly identical modules. This generator has been used to construct a family of systems that in theory can be made arbitrarily large. Several experiments were conducted to explore the sensitivity of the Jelinski-Moranda model to violations of the assumption that all defects have equal probability of being discovered.

The broad spectrum toolset for upstream testing, verification and analysis
S. L. Gerhart. Proceedings of the Second Workshop on Software Testing, Verification, and Analysis, 1988. doi:10.1109/WST.1988.5349

A description is given of a suite of tools to support analysis of properties of sequences associated with a specification, with input or output to a program, or with simple behavioral models of a system under design. The toolset's capabilities include generating sequences to satisfy combinations of conditions, organizing these condition combinations as tables of cases to serve as test data, and visualizing the effects of executing a chosen sequence. The technology base is Prolog extended with a powerful window package.

Selectivity of data-flow and control-flow path criteria
S. J. Zeil. Proceedings of the Second Workshop on Software Testing, Verification, and Analysis, 1988. doi:10.1109/WST.1988.5377

A given path-selection criterion is more selective than another such criterion with respect to some testing goal if it never requires more, and sometimes requires fewer, test paths to achieve that goal. The author presents canonical forms of control-flow and data-flow path selection criteria and demonstrates that, for some simple testing goals, the data-flow criteria as a general class are more selective than the control-flow criteria. It is shown, however, that this result does not hold for general testing goals, a limitation that appears to stem directly from the practice of defining data-flow criteria on the computation history contributing to a single result.

Generic constraint logic programming and incompleteness in the analysis of software
C. Wild. Proceedings of the Second Workshop on Software Testing, Verification, and Analysis, 1988. doi:10.1109/WST.1988.5368

Current logic programming systems, as typified by Prolog, contain limitations that restrict their usefulness during the specification, design, and testing of software. A major limitation is the inability to perform analysis in the presence of incomplete information. Three sources of incompleteness are discussed. The result of this analysis suggests an efficient algorithm for a special case. Furthermore, it partitions the problem for testing purposes into three classes: (1) those points which satisfy the special case; (2) those which satisfy the general case but not the special case; and (3) those points which satisfy neither. Thus, the analysis has uncovered possible structure within the implementation, including the case in which the implementation has failed to address class 2 correctly (an error of omission) but handles classes 1 and 3 correctly.

A practical method for software quality control via program mutation
D. Wu, M. Hennell, D. Hedley, I. J. Riddell. Proceedings of the Second Workshop on Software Testing, Verification, and Analysis, 1988. doi:10.1109/WST.1988.5371

A novel method of program mutation is presented that increases the feasibility, effectiveness, and efficiency of searching for those errors that have escaped the activities of testers and competent programmers. It is based on syntax direction and is aided by the language semantics. This means that the scope of a program mutation (i.e. the mutation rules of the method) and its corresponding mutants are rigorously directed by a syntax and related semantics as defined by the tester. A paradigm for the mutation syntax and semantics, limited to Boolean expressions, is given, together with the corresponding test coverage metrics.
