Don't verify, abstract!
C. O'Halloran, Alf Smith. DOI: 10.1109/ASE.1998.732573

Describes a notation and tool for demonstrating to a third-party certifier that software written in a subset of Ada is safe, and gives some experience of using them on real projects. The thesis underlying the design is that people write adequate code, but that they make design and implementation decisions which can conflict with each other and introduce safety problems. The usual paradigm of formally specifying the code and then developing and verifying it is less cost-effective than writing the code and then abstracting it to a level at which human judgements can be made, because more people know how to write good code than can write effective formal specifications. The tool processes a formal, or informal, argument that code meets its safety requirements, using literate programming and concepts from the refinement calculus developed at Oxford University.

Planning equational verification in CCS
R. Monroy, A. Bundy, Ian Green. DOI: 10.1109/ASE.1998.732569

Most efforts to automate the formal verification of communicating systems have centred around finite-state systems (FSSs). However, FSSs are incapable of modelling many practical communicating systems, and hence there is interest in a novel class of problems, which we call VIPSs (Value-passing Infinite-state Parameterised Systems). Existing approaches using model checking over FSSs are insufficient for VIPSs, due to their inability both to reason with and about domain-specific theories, and to cope with systems having an unbounded or arbitrary state space. We use the Calculus of Communicating Systems (CCS) with parameterised constants to express and specify VIPSs. We use the laws of CCS to conduct the verification task. This approach allows us to study communicating systems, regardless of their state space, and the data such systems communicate. Automating theorem proving in this system is an extremely difficult task. We provide automated methods for CCS analysis; they are applicable to both FSSs and VIPSs. Adding these methods to the Clam proof-planner, we have implemented an automated theorem prover that is capable of dealing with problems outside the scope of current methods. This paper describes these methods, gives an account as to why they work and provides a short summary of experimental results.

The very idea of software development environments: a conceptual architecture for the ARTS environment paradigm
A. Haeberer, T. Maibaum. DOI: 10.1109/ASE.1998.732667

For the last three years the authors have been building an instantiation of a system development paradigm, called ARTS. The paradigm consists of a view of what a system development environment is, in general terms, and a methodology for instantiating the paradigm for specific domains of application. The motivation for and the explanation of the paradigm are derived from extant epistemological models of the method of natural science. They assert that these models are directly applicable to the domain of software and systems construction, and that, from them, one can derive principles and explanations for what a software development environment should be. They briefly present the statement view of scientific theories, a conceptual architecture for software development environments whose rationale is given in terms of the statement view, and some examples of how the present version of ARTS realises this conceptual architecture.

Developing the designer's toolkit with software comprehension models
Andrew Walenstein. DOI: 10.1109/ASE.1998.732687

Cognitive models of software comprehension are potential sources of theoretical knowledge for tool designers. Although their use in the analysis of existing tools is fairly well established, the literature has shown only limited use of such models for directly developing design ideas. This paper suggests a way of utilizing existing cognitive models of software comprehension to generate design goals and suggest design strategies early in the development cycle. A crucial part of our method is a scheme for explaining the value of tool features by describing the mechanisms that are presumed to underlie the expected improvements in task performance.

Brewing fresh Java from legacy Lisp: an experiment in automated reverse engineering
T. Harmer, J. M. Boyle. DOI: 10.1109/ASE.1998.732689

The issues of re-engineering and reverse engineering have become important ones in the computing industry. A legacy system that has evolved has usually been worked on by many different programmers and reflects the different programming styles as practised by those programmers. We address the re-engineering of a large system, the TAMPR automatic program transformation system, written in pure Lisp. TAMPR is an essential tool in ongoing research on potential applications of automated program transformation. The program implementing the TAMPR system is better designed and more consistently coded than most legacy systems. Why, then, is reverse engineering being attempted for this system, given that it suffers few of the problems of more traditionally implemented legacy systems? We are interested in studying the problem of abstraction in reverse engineering, and the TAMPR system, with its systematic design and coding, provides a good starting point for studying approaches to automated abstraction to an object-oriented form. In addition, while the system in its present form meets the current needs of its users, there are problems with providing widely available, efficient implementations of the system. The target language for this experiment in reverse engineering is Java. Java was chosen because of its widespread availability, claimed portability, and its integration with components for the construction of graphical user interfaces. We use TAMPR transformations to reverse engineer the TAMPR program itself.

Automated integrative analysis of state-based requirements
Barbara J. Czerny, M. Heimdahl. DOI: 10.1109/ASE.1998.732601

Statically analyzing requirements specifications to assure that they possess desirable properties is an important activity in any rigorous software development project. The analysis is performed on an abstraction of the original requirements specification. Abstractions in the model may lead to spurious errors in the analysis output. Spurious errors are conditions that are reported as errors, but information abstracted out of the model precludes the reported conditions from being satisfied. A high ratio of spurious errors to true errors in the analysis output makes it difficult, error-prone, and time consuming to find and correct the true errors. We describe an iterative and integrative approach for analyzing state-based requirements that capitalizes on the strengths of a symbolic analysis component and a reasoning component while circumventing their weaknesses. The resulting analysis method is fast enough and automated enough to be used on a day-to-day basis by practicing engineers, and generates analysis reports with a small ratio of spurious errors to true errors.

Testing using log file analysis: tools, methods, and issues
J. Andrews. DOI: 10.1109/ASE.1998.732614

Large software systems often keep log files of events. Such log files can be analyzed to check whether a run of a program reveals faults in the system. We discuss how such log files can be used in software testing. We present a framework for automatically analyzing log files, and describe a language for specifying analyzer programs and an implementation of that language. The language permits compositional, compact specifications of software, which act as test oracles; we discuss the use and efficacy of these oracles for unit- and system-level testing in various settings. We explore methodological issues such as efficiency and logging policies, and the scope and limitations of the framework. We conclude that testing using log file analysis constitutes a useful methodology for software verification, somewhere between current testing practice and formal verification methodologies.

Automated knowledge acquisition and application for software development projects
E. Baisch, Thomas Liedtke. DOI: 10.1109/ASE.1998.732686

The application of empirical knowledge about the environment-dependent software development process is mostly based on heuristics. In this paper, we show how one can express these heuristics by using a tailored fuzzy expert system. Metrics are used as input, enabling a prediction for a related quality factor like correctness, defined as the inverse of criticality or error-proneness. By using genetic algorithms, we are able to extract the complete fuzzy expert system out of the available data of a finished project. We describe its application for the next project executed in the same development environment. As an example, we use complexity metrics which are used to predict the error-proneness of software modules. The feasibility and effectiveness of the approach are demonstrated with results from large switching system software projects. We present a summary of the lessons learned and give our ideas about further applications of the approach.

Task oriented software understanding
Ali Erdem, W. Johnson, S. Marsella. DOI: 10.1109/ASE.1998.732658

The main factors that affect software understanding are the complexity of the problem solved by the program, the program text, the user's mental ability and experience, and the task being performed. The paper describes a planning-based solution to the software understanding problem that focuses on the user's task and expertise. First, user questions about software artifacts have been studied and the most commonly asked questions identified. These questions are organized into a question model, and procedures for answering them are developed. Then, the patterns in user questions asked while performing certain tasks have been studied, and these patterns are used to build generic task models. The explanation system uses these task models in several ways. The task model, along with a user model, is used to generate explanations tailored to the user's task and expertise. In addition, the task model allows the system to provide explicit task support in its interface.

Statically checkable design level traits
J. Gil, Y. Eckel. DOI: 10.1109/ASE.1998.732651

The paper is concerned with those properties of software that can be statically surmised from the source code. Many such properties have been extensively studied from the perspective of compiler construction technology. However, live variable analysis, alias analysis and the like are too low-level to be of interest to the software engineer. The authors identify a family of statically checkable properties that represent a higher-level abstraction and reach the detailed design level. Properties in this family, which is defined by five precise distinguishing criteria, are called traits. Examples of traits include mutability, const correctness, ownership, and pure functions. In fact, in many ways, traits are non-standard types. The authors argue that traits should bring about benefits similar to those of static typing in terms of clarity, understandability, adherence to design decisions, and robustness. They further argue that traits can be used for better checking of substitutability in inheritance relationships. Having made the case for traits, they proceed to describe a taxonomy for classifying and understanding traits, and show how it can be used to better understand previous work on this topic. The paper also discusses the abstract computational complexity of traits and compares previous research from that perspective.