Amir Michail and D. Notkin, "Illustrating object-oriented library reuse by example: a tool-based approach" (DOI: 10.1109/ASE.1998.732640)

The authors present a tool-based approach that examines how example programs reuse a particular library. The approach can facilitate reuse by: (1) guiding the developer towards important library classes of general utility; (2) guiding the developer towards library classes particularly useful for a specific application domain; and (3) providing access to the relevant source code in each example for further inspection. The approach is supported by CodeWeb, a reuse tool they have built for C++ and Java libraries.
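To make the mining idea concrete, here is a minimal Python sketch of the kind of usage statistics such a tool could compute. CodeWeb itself analyzes C++ and Java source; all program names, class names, and the "domain share versus baseline" ranking below are invented for illustration.

```python
# Hypothetical sketch of usage mining over example programs
# (CodeWeb analyzes real C++/Java source; this data is invented).
from collections import Counter

# each example program is modelled as the set of library classes it reuses
examples = {
    "text_editor":  {"String", "Vector", "TextArea"},
    "spreadsheet":  {"String", "Vector", "Table", "Cell"},
    "paint_tool":   {"String", "Canvas", "Color"},
    "word_counter": {"String", "HashMap"},
}
gui_domain = {"text_editor", "paint_tool"}  # an application-domain subset

overall = Counter(c for classes in examples.values() for c in classes)
in_domain = Counter(c for name in gui_domain for c in examples[name])

# (1) classes of general utility: reused by many examples overall
print("general utility:", overall.most_common(3))

# (2) classes especially useful in one domain: domain share above baseline
for cls, n in in_domain.items():
    share = n / len(gui_domain)
    baseline = overall[cls] / len(examples)
    if share > baseline:
        print(f"domain-specific: {cls} ({share:.0%} of GUI examples)")
```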
E. Lee, C. Park and D. Lee, "Detection of exclusive OR global predicates" (DOI: 10.1109/ASE.1998.732649)

Detecting global predicates in a distributed program is a useful technique for debugging and testing the program. Past research has considered several restricted forms of predicates, including conjunctive predicates and linked predicates, and their detection algorithms. The authors introduce exclusive OR global predicates to describe exclusive usage of shared resources in distributed programs. An exclusive OR global predicate holds for a given run only when at most one local predicate is true at every consistent global state during the run. A single exclusive OR global predicate suffices to describe the mutual exclusion condition of n processes, whereas it takes O(n^2) conjunctive predicates. Moreover, the exclusive OR condition is easily detectable by sequentializing all true events in a given run. A centralized algorithm for detecting exclusive OR global predicates is presented.
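The paper's centralized algorithm works by sequentializing true events. As a hedged illustration of the condition itself, the sketch below uses vector clocks (my assumption, not necessarily the authors' mechanism): the predicate is violated exactly when two intervals in which local predicates hold are concurrent, since then some consistent global state sees both true at once. All timestamps are invented.

```python
# A minimal centralized checker for the exclusive-OR condition, assuming
# events carry vector clocks; the interval data below is invented.

def happened_before(a, b):
    """Vector-clock comparison: a -> b iff a <= b componentwise and a != b."""
    return all(x <= y for x, y in zip(a, b)) and a != b

# (start, end) vector timestamps of intervals where a local predicate is true
true_intervals = [
    ((1, 0, 0), (2, 0, 0)),   # process 0 holds the resource
    ((2, 1, 0), (2, 2, 0)),   # process 1, causally after process 0 released
    ((0, 0, 1), (0, 0, 2)),   # process 2, concurrent with both: a violation
]

def xor_predicate_holds(intervals):
    for i in range(len(intervals)):
        for j in range(i + 1, len(intervals)):
            (s1, e1), (s2, e2) = intervals[i], intervals[j]
            # two true intervals can coexist in some consistent global
            # state unless one ends before the other starts
            if not (happened_before(e1, s2) or happened_before(e2, s1)):
                return False
    return True

print(xor_predicate_holds(true_intervals))  # False: two intervals overlap
```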
J. McDonald, D. Hoffman and P. Strooper, "Programmatic testing of the Standard Template Library containers" (DOI: 10.1109/ASE.1998.732610)

We describe part of an STL conformance test suite under development. Test suites for all of the STL containers have been written, demonstrating the feasibility of thorough and highly automated testing of industrial component libraries. We describe affordable test suites that provide good code and boundary value coverage, including the thousands of cases that naturally occur from combinations of boundary values. We show how two simple oracles can provide fully automated output checking for all the containers. We refine the traditional categories of black-box and white-box testing to specification-based, implementation-based and implementation-dependent testing, and show how these three categories highlight the key cost/thoroughness trade-offs.
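The oracle-plus-boundary-combination idea translates naturally to other languages. Below is a Python analog (the paper's suite targets the C++ STL; the container, the boundary sizes, and the single-oracle design are my simplifications): a trusted reference type checks a container under test across every combination of boundary sizes.

```python
# Python analog of oracle-based container testing: boundary values are
# combined exhaustively and a trusted reference type acts as the oracle.
from itertools import product

BOUNDARY_SIZES = [0, 1, 2, 7, 8]   # hypothetical boundaries, e.g. around a block size

class MiniDeque:                   # stand-in for a container under test
    def __init__(self): self._d = []
    def push_back(self, x): self._d.append(x)
    def pop_front(self): return self._d.pop(0)
    def size(self): return len(self._d)

def run_case(n_push, n_pop):
    cut, oracle = MiniDeque(), []  # oracle: a trusted Python list
    for i in range(n_push):
        cut.push_back(i); oracle.append(i)
    for _ in range(min(n_pop, len(oracle))):
        assert cut.pop_front() == oracle.pop(0)
    assert cut.size() == len(oracle)

# many cases arise naturally from combinations of boundary values
for n_push, n_pop in product(BOUNDARY_SIZES, repeat=2):
    run_case(n_push, n_pop)
print("all boundary combinations passed")
```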
Y. Ledru, "Identifying pre-conditions with the Z/EVES theorem prover" (DOI: 10.1109/ASE.1998.732566)

Starting from a graphical data model (a subset of the OMT object model), a skeleton of a formal specification can be generated and completed to express several constraints and provide a precise formal data description. Standard operations that modify instances of this data model can then be systematically specified. Since these operations may invalidate the constraints, it is useful to identify their pre-conditions. In this paper, the Z/EVES theorem prover is used to calculate, and attempt to simplify, the pre-conditions of these operations. The developer may then identify a set of conditions and use the prover to verify that they logically imply the pre-condition.
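Where Z/EVES discharges this obligation symbolically, the obligation itself can be illustrated by brute force. In the invented Python sketch below, a developer-supplied candidate pre-condition is checked, over a tiny finite state space, to guarantee that an operation preserves a data-model constraint; a theorem prover would establish the same implication without enumerating states.

```python
# Toy stand-in for the Z/EVES workflow: check that a candidate
# pre-condition implies the operation preserves the invariant.
# All predicates and the state space are invented for illustration.
from itertools import combinations

UNIVERSE = range(4)

def invariant(s):                 # data-model constraint: at most 2 elements
    return len(s) <= 2

def add(s, x):                    # a standard operation on model instances
    return s | {x}

def candidate_pre(s, x):          # developer-supplied pre-condition
    return len(s) < 2 or x in s

states = [set(c) for n in range(4) for c in combinations(UNIVERSE, n)]
ok = all(invariant(add(s, x))
         for s in states if invariant(s)
         for x in UNIVERSE if candidate_pre(s, x))
print("pre-condition sufficient:", ok)   # True
```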
G. Gannod, Yonghao Chen and B. Cheng, "An automated approach for supporting software reuse via reverse engineering" (DOI: 10.1109/ASE.1998.732586)

Formal approaches to software reuse rely heavily upon a specification matching criterion, where a query expressed as a formal specification is matched against a library of components indexed by specifications. In previous investigations, we addressed the use of formal methods and component libraries to support software reuse and the construction of software based on component specifications. A difficulty for all formal approaches to software reuse is the creation of the formal indices. We have developed an approach to reverse engineering that uses formal methods to derive formal specifications of existing programs. In this paper, we present an approach that combines software reverse engineering and software reuse to populate specification libraries for reuse purposes. In addition, we discuss the results of our initial investigations into the use of tools to support the entire process of populating and using a specification library to construct a software application.
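One standard matching criterion in this literature (not necessarily the one used here) is the "plug-in" match: the query's pre-condition must imply the component's, and the component's post-condition must imply the query's. The Python sketch below brute-forces that check over a small finite domain; real systems discharge the implications with a prover, and all specs here are invented.

```python
# Brute-force sketch of "plug-in" specification matching over a small
# finite domain (a real system would use a theorem prover instead).

INPUTS, OUTPUTS = range(0, 10), range(0, 10)

def implies(p, q, pts):
    return all(q(*pt) for pt in pts if p(*pt))

# query spec: given x >= 1, return some y with y >= x
query_pre  = lambda x: x >= 1
query_post = lambda x, y: y >= x

# library component spec: given x >= 0, returns y == x + 1
comp_pre   = lambda x: x >= 0
comp_post  = lambda x, y: y == x + 1

pre_ok  = implies(query_pre, comp_pre, [(x,) for x in INPUTS])
post_ok = implies(comp_post, query_post,
                  [(x, y) for x in INPUTS for y in OUTPUTS])
print("component satisfies query:", pre_ok and post_ok)   # True
```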
N. Tracey, J. A. Clark, K. Mander and J. Mcdermid, "An automated framework for structural test-data generation" (DOI: 10.1109/ASE.1998.732680)

Structural testing criteria are mandated in many software development standards and guidelines. The process of generating test data to achieve 100% coverage of a given structural coverage metric is labour-intensive and expensive. This paper presents an approach to automating the generation of such test data, based on a dynamic, optimisation-based search for the required test data. The same approach can be generalised to solve other test-data generation problems. Three such applications are discussed: boundary value analysis, assertion/run-time exception testing, and component re-use testing. A prototype tool-set has been developed to facilitate the automatic generation of test data for these structural testing problems. The results of preliminary experiments using this technique and the prototype tool-set are presented and demonstrate the efficiency and effectiveness of the approach.
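Dynamic optimisation-based searches of this kind typically minimise a branch-distance cost derived from the program's predicates. The minimal sketch below (the paper's tool-set uses a more robust search than this simple hill climber, and the target branch is invented) finds inputs that cover a branch guarded by a == b*b.

```python
# Sketch of dynamic, optimisation-based test-data generation: a
# branch-distance cost guides a local search toward inputs that
# take the target branch.
import random

K = 1  # penalty added while a relational condition is unsatisfied

def branch_distance(a, b):
    """Cost of making the target branch condition a == b*b true."""
    d = abs(a - b * b)
    return 0 if d == 0 else d + K

def search(max_iters=10_000):
    a, b = random.randint(0, 100), random.randint(0, 100)
    cost = branch_distance(a, b)
    for _ in range(max_iters):
        if cost == 0:
            return a, b                       # branch covered
        # try a neighbouring input vector; keep it if cost does not rise
        na, nb = a + random.choice([-1, 0, 1]), b + random.choice([-1, 0, 1])
        nc = branch_distance(na, nb)
        if nc <= cost:
            a, b, cost = na, nb, nc
    return None

print(search())   # e.g. (49, 7): input data that executes the target branch
```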
J. V. Baalen, P. Robinson, M. Lowry and T. Pressburger, "Explaining synthesized software" (DOI: 10.1109/ASE.1998.732661)

Motivated by NASA's need for high-assurance software, NASA Ames' Amphion project has developed a generic program generation system based on deductive synthesis. Amphion has a number of advantages, such as the ability to develop a new synthesis system simply by writing a declarative domain theory. However, as a practical matter, the validation of the domain theory for such a system is problematic because the link between generated programs and the domain theory is complex. As a result, when generated programs do not behave as expected, it is difficult to isolate the cause, whether it be an incorrect problem specification or an error in the domain theory. The paper describes a tool being developed that provides formal traceability between specifications and generated code for deductive synthesis systems. It is based on extensive instrumentation of the refutation-based theorem prover used to synthesize programs. It takes augmented proof structures and abstracts them to provide explanations of the relation between a specification, a domain theory, and synthesized code. In generating these explanations, the tool exploits the structure of Amphion domain theories, so the end user is not confronted with the intricacies of raw proof traces. This tool is crucial for the validation of domain theories as well as being important in everyday use of the code synthesis system.
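As a loose illustration of the traceability idea only, with entirely invented names and data structures (Amphion's instrumentation of its theorem prover is far richer), the sketch below records which domain-theory axioms justified each synthesized fragment and answers the question a developer asks when output misbehaves: which parts of the theory produced this line?

```python
# Invented structures illustrating formal traceability: each synthesized
# fragment is linked back to the proof steps, and hence the domain-theory
# axioms, that produced it.
from dataclasses import dataclass, field

@dataclass
class ProofStep:
    axiom: str            # name of the domain-theory axiom applied
    produces: str         # fragment of synthesized code it justifies

@dataclass
class Trace:
    steps: list = field(default_factory=list)
    def explain(self, code_fragment):
        used = [s.axiom for s in self.steps if s.produces == code_fragment]
        return (f"{code_fragment!r} follows from axioms: {', '.join(used)}"
                if used else f"{code_fragment!r} has no recorded justification")

trace = Trace([ProofStep("frame-conversion-axiom", "v = j2000_to_ecliptic(u)"),
               ProofStep("composition-axiom",      "v = j2000_to_ecliptic(u)")])
print(trace.explain("v = j2000_to_ecliptic(u)"))
```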
T. McCluskey and M. West, "Towards the automated debugging and maintenance of logic-based requirements models" (DOI: 10.1109/ASE.1998.732591)

We describe a tools environment which automates the validation and maintenance of a requirements model written in many-sorted first-order logic. We focus on a translator that produces an executable form of the model; blame assignment functions, which input batches of misclassified tests (i.e. training examples) and output the likely faulty parts of the model; and a theory reviser, which inputs the faulty parts and examples and outputs suggested revisions to the model. In particular, we concentrate on the problems encountered when applying these tools to a real application: a requirements model containing air traffic control separation standards, operating methods and airspace information.
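A minimal sketch of blame assignment, under my assumption (not necessarily the authors' algorithm) that rules which fire on misclassified examples are the prime suspects. The rules and the separation-standard data are invented.

```python
# Invented blame-assignment sketch: rank rules by how often they
# contribute to a misclassified verdict.
from collections import Counter

def rule_big_gap(e):    return e["separation"] >= 5
def rule_same_level(e): return e["level_a"] == e["level_b"]
rules = [rule_big_gap, rule_same_level]

def classify(e):                       # model: "safe" iff any rule fires
    return any(r(e) for r in rules)

# labelled training examples; the label is the expected classification
examples = [
    ({"separation": 6, "level_a": 1, "level_b": 2}, True),
    ({"separation": 3, "level_a": 2, "level_b": 2}, False),  # misclassified
    ({"separation": 2, "level_a": 1, "level_b": 3}, False),
]

blame = Counter()
for e, expected in examples:
    if classify(e) != expected:        # a misclassified test
        for r in rules:
            if r(e):                   # this rule contributed to the verdict
                blame[r.__name__] += 1
print(blame.most_common())             # [('rule_same_level', 1)]
```

The faulty parts ranked here would then be handed, with the examples, to the theory reviser.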
M. Goedicke, T. Meyer and C. Piwetz, "On detecting and handling inconsistencies in integrating software architecture design and performance evaluation" (DOI: 10.1109/ASE.1998.732632)

We consider the problem of detecting and handling inconsistencies in software development processes using a graph-based approach, which is a natural way to express the various options and possibilities for attacking the problem. We apply techniques developed in the area of software architecture design that use performance models in a structured way, in order to produce a design which also incorporates nonfunctional requirements in terms of quantitative performance.
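The abstract leaves the mechanics open; one plausible, entirely invented reading is sketched below: the architecture is a graph, the performance model annotates its nodes, and inconsistencies are components with missing annotations or with demands exceeding downstream capacity.

```python
# Invented sketch of a graph-based consistency check between an
# architecture design and its performance model.
architecture = {                 # component -> components it sends work to
    "client": ["dispatcher"],
    "dispatcher": ["worker_a", "worker_b"],
    "worker_a": [], "worker_b": [],
}
perf = {                         # performance model: service rate per second
    "client": 50, "dispatcher": 40, "worker_a": 25,   # worker_b missing
}

inconsistencies = []
for node, succs in architecture.items():
    if node not in perf:
        inconsistencies.append(f"{node}: no performance model")
    elif succs:
        downstream = sum(perf.get(s, 0) for s in succs)
        if perf[node] > downstream:
            inconsistencies.append(
                f"{node}: offered load {perf[node]}/s exceeds downstream "
                f"capacity {downstream}/s")
print(inconsistencies)
```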
A. Reyes and D. Richardson, "Specification-based testing of Ada units with low encapsulation" (DOI: 10.1109/ASE.1998.732563)

We describe a system that improves testing quality by supporting automatic test data selection, execution and result verification. The system tests poorly encapsulated Ada units against formal specifications. This task is difficult partly because the unit's interface is not explicit, but rather is buried in the body/implementation code. We attack this problem by making the unit's interface explicit and complete, via automatic and manual analysis of the body. The complete interface is represented using an extended algebraic signature notation. Once the signature has been discovered, it can be reformulated so that a collection of well-defined, static mappings is established between it and the signature of the unit's formal specification. These mappings guide the development of test artifact transformers and oracles, which support automatic test data selection, execution and result verification. This paper discusses the problems that arise when testing under low encapsulation, presents our solution using an ongoing example, and compares it to earlier solutions.
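The mapping idea can be sketched outside Ada. In the hypothetical Python example below (the unit, the abstraction function and the spec are all invented), an abstraction function maps the unit's buried concrete state into the specification's state space, and an oracle checks each operation against the abstract model.

```python
# Sketch of spec-based testing via a static mapping (the paper targets
# Ada units; everything here is an invented Python analog).

class LegacyCounterUnit:
    """Stand-in for a poorly encapsulated unit: state is two raw fields."""
    def __init__(self): self.hi, self.lo = 0, 0        # value = hi*10 + lo
    def bump(self):
        self.lo += 1
        if self.lo == 10: self.hi, self.lo = self.hi + 1, 0

def abstraction(unit):            # static mapping into the spec's state space
    return unit.hi * 10 + unit.lo

def spec_bump(abstract_value):    # formal spec: bump increments by one
    return abstract_value + 1

unit = LegacyCounterUnit()
for _ in range(25):               # oracle: compare unit to spec after each op
    expected = spec_bump(abstraction(unit))
    unit.bump()
    assert abstraction(unit) == expected
print("unit conforms to the specification on this test sequence")
```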