Nguyen Tran Sy, Y. Deville. "Automatic test data generation for programs with integer and float variables." In Proceedings 16th Annual International Conference on Automated Software Engineering (ASE 2001). doi:10.1109/ASE.2001.989786

The paper presents a novel approach to automated test data generation for imperative programs containing integer, boolean and/or float variables. Our approach is based on consistency techniques that integrate integer and float variables, and it handles the statement, branch and path coverage criteria: our purpose is to automatically generate test data that cause the program to execute a given statement, traverse a given branch, or follow a specified path. For path coverage, the specified path is transformed into a path constraint, which is solved by an interval-based constraint solving algorithm handling integer, boolean and real variables; a valid test input is then extracted from the interval solutions. For statement (and branch) coverage, a path reaching the specified statement or branch is dynamically constructed, and our path coverage algorithm is then applied. Both the search for a suitable path and the solving of path constraints make extensive use of consistency techniques. We propose a simple consistency notion called eBox consistency, which generalizes box consistency to integer and float variables and is sufficient for our purpose. A prototype has been developed, and experimental results show the feasibility of our approach. This work extends that of A. Gotlieb (2000) to float and boolean variables.
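The interval-based solving step described in this abstract can be pictured as a branch-and-prune search over mixed integer/float boxes. The sketch below is an illustrative simplification, not the paper's eBox-consistency algorithm: the constraint, the bounds, and all names (`split`, `solve`, `holds`) are invented for the example.

```python
# Branch-and-prune search over mixed integer/float boxes (illustrative only;
# this is far simpler than the eBox-consistency solver the paper describes).

def split(lo, hi, is_int):
    """Split an interval at its midpoint, staying on integers when needed."""
    if is_int:
        mid = (lo + hi) // 2
        return (lo, mid), (mid + 1, hi)
    mid = (lo + hi) / 2.0
    return (lo, mid), (mid, hi)

def solve(box, kinds, holds, depth=30):
    """Search `box` (a list of (lo, hi) intervals; kinds[d] is True for
    integer dimensions) for a point satisfying the path constraint `holds`.
    Returns a satisfying test input, or None if the search gives up."""
    # Sample the midpoint of the current box.
    point = [(lo + hi) // 2 if k else (lo + hi) / 2.0
             for (lo, hi), k in zip(box, kinds)]
    if holds(point):
        return point
    if depth == 0:
        return None
    # Otherwise split the widest dimension and recurse on both halves.
    widths = [hi - lo for lo, hi in box]
    d = widths.index(max(widths))
    if widths[d] <= (1 if kinds[d] else 1e-9):
        return None  # box too small to split further
    for half in split(*box[d], kinds[d]):
        found = solve(box[:d] + [half] + box[d + 1:], kinds, holds, depth - 1)
        if found is not None:
            return found
    return None

# Hypothetical path constraint: x > 5 and y*y <= x, with x an integer
# in [0, 100] and y a float in [0.0, 10.0].
test_input = solve([(0, 100), (0.0, 10.0)], [True, False],
                   lambda p: p[0] > 5 and p[1] * p[1] <= p[0])
```

A real solver would prune each box with consistency filtering before splitting; here the sampling-plus-splitting loop only conveys the overall shape of the search.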
E. Denney. "The synthesis of a Java card tokenisation algorithm." In Proceedings 16th Annual International Conference on Automated Software Engineering (ASE 2001). doi:10.1109/ASE.2001.989789

We describe the development of a Java bytecode optimisation algorithm using the methodology of program extraction. We develop the algorithm as a collection of proofs and definitions in the Coq proof assistant, and then use Coq's extraction mechanism to automatically generate a program in OCaml. The extraction methodology guarantees that this program is correct. We discuss the feasibility of the methodology and suggest some improvements that could be made.
J. Whittle, J. V. Baalen, J. Schumann, P. Robinson, T. Pressburger, J. Penix, Phil Oh, M. Lowry, G. Brat. "Amphion/NAV: deductive synthesis of state estimation software." In Proceedings 16th Annual International Conference on Automated Software Engineering (ASE 2001). doi:10.1109/ASE.2001.989837

Previous work on domain-specific deductive program synthesis described the Amphion/NAIF system for generating Fortran code from high-level graphical specifications of problems in space system geometry. Amphion/NAIF specifications describe input-output functions that compute geometric quantities (e.g., the distance between two planets at a point in time, or the time when a radio communication path between a spacecraft and Earth is occluded) by composing Fortran subroutines from the NAIF subroutine library developed at the Jet Propulsion Laboratory. In essence, Amphion/NAIF synthesizes code for gluing together the NAIF components so that the generated code implements the specification, along with a concurrently generated proof that this implementation is correct. Amphion/NAIF demonstrated the success of domain-specific deductive program synthesis and is still in use today within the space science community. However, a number of questions remained open, which we attempt to answer in this paper.
Yannick Chevalier, L. Vigneron. "A tool for lazy verification of security protocols." In Proceedings 16th Annual International Conference on Automated Software Engineering (ASE 2001). doi:10.1109/ASE.2001.989832

We present the lazy strategy implemented in Casrul, a compiler of cryptographic protocols. The purpose of this compiler is to verify protocols and to translate them into rewrite rules that can be used by several kinds of automatic or semi-automatic tools for finding flaws or proving properties. It is entirely automatic, and the efficiency of the generated rules is guaranteed by the use of a lazy model of intruder behavior. This efficiency is illustrated on several examples.
G. Brat, W. Visser. "Combining static analysis and model checking for software analysis." In Proceedings 16th Annual International Conference on Automated Software Engineering (ASE 2001). doi:10.1109/ASE.2001.989812

We present an iterative technique in which model checking and static analysis are combined to verify large software systems. The role of the static analysis is to compute partial-order information, which the model checker uses to reduce the state space. During exploration, the model checker also computes aliasing information that it passes to the static analyzer, which can then refine its analysis. The result of this refined analysis is fed back to the model checker, which updates its partial-order reduction. At each step of this iterative process, the static analysis computes optimistic information, which results in an unsafe reduction of the state space. However, we show that the process converges to a fixed point, at which time the partial-order information is safe and the whole state space is explored.
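The alternation this abstract describes can be sketched as a fixed-point loop. Everything concrete below is invented for illustration: the three "actions" and the variables they write, the optimistic independence analysis, and the single aliasing fact the exploration "discovers".

```python
# Fixed-point alternation between an optimistic static analysis and a
# model-checker stand-in. Actions, variables, and the discovered aliasing
# fact are all invented for this sketch.

ACTIONS = {"a": "x", "b": "y", "c": "x"}  # action -> variable it writes

def analyze(aliases):
    """Optimistic independence: two actions commute unless the aliasing
    facts known so far say they may touch a common location."""
    independent = set()
    for p in ACTIONS:
        for q in ACTIONS:
            if p < q:
                locs_p = aliases.get(ACTIONS[p], {ACTIONS[p]})
                locs_q = aliases.get(ACTIONS[q], {ACTIONS[q]})
                if locs_p.isdisjoint(locs_q):
                    independent.add((p, q))
    return independent

def explore(independent):
    """Model-checker stand-in: exploration (under the given partial-order
    reduction) reveals that "y" may alias "x"."""
    return {"y": {"x", "y"}}

aliases, independent = {}, None
while True:
    refined = analyze(aliases)      # static analysis step
    if refined == independent:      # fixed point: the reduction is now safe
        break
    independent = refined
    aliases = explore(independent)  # exploration feeds aliasing facts back
```

The first analysis optimistically declares most pairs independent; once the discovered aliasing is taken into account, the independence set shrinks and stabilizes, mirroring the convergence argument in the abstract.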
Hoda Fahmy, R. Holt, J. Cordy. "Wins and losses of algebraic transformations of software architectures." In Proceedings 16th Annual International Conference on Automated Software Engineering (ASE 2001). doi:10.1109/ASE.2001.989790

To understand, analyze and modify software, we commonly examine and manipulate its architecture; for example, we may want to examine the architecture at different levels of abstraction. Such manipulations can be viewed as architectural transformations and, more specifically, as graph transformations. We evaluate relational algebra as a way of specifying and automating these architectural transformations. Specifically, we examine Grok, a relational calculator that is part of the PBS toolkit. We show that relational algebra is practical: we are able to specify many of the transformations commonly occurring during software maintenance and, using a tool like Grok, to manipulate large software graphs quite efficiently; this is a "win". However, the approach is not well suited to expressing some types of transformations involving patterns of edges and nodes; this is a "loss". By means of a set of examples, the paper makes clear when the approach wins and when it loses.
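A transformation of the kind evaluated here can be written as a line or two of relational algebra. The sketch below lifts module-level calls to the subsystem level via the composition contain ; call ; inverse(contain); it is illustrative Python over sets of pairs, not Grok syntax, and the small containment/call graph is invented.

```python
# Relational-algebra "lift" of module-level calls to the subsystem level.
# The graph below is invented; real PBS fact bases are extracted from code.

def compose(r, s):
    """Relational composition: {(a, c) | (a, b) in r and (b, c) in s}."""
    return {(a, c) for (a, b) in r for (b2, c) in s if b == b2}

def inverse(r):
    """Relational inverse: {(b, a) | (a, b) in r}."""
    return {(b, a) for (a, b) in r}

# contain relates a subsystem to the modules inside it; call relates modules.
contain = {("S1", "m1"), ("S1", "m2"), ("S2", "m3")}
call = {("m1", "m3"), ("m2", "m1")}

# Lifted call graph: contain ; call ; inverse(contain).
lifted = compose(compose(contain, call), inverse(contain))
```

Here `lifted` contains ("S1", "S2"), because m1 inside S1 calls m3 inside S2, plus the self-edge ("S1", "S1") from the internal call m2 -> m1. Transformations that instead match a specific pattern of nodes and edges are exactly where the paper reports a "loss" for this style.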
A. Zisman, A. Kozlenkov. "Knowledge base approach to consistency management of UML specifications." In Proceedings 16th Annual International Conference on Automated Software Engineering (ASE 2001). doi:10.1109/ASE.2001.989829

The use of the Unified Modelling Language (UML) during systems development has been growing in scale and complexity, often resulting in inconsistent specifications. We present a knowledge-base, goal-driven approach for consistency management of UML specifications, represented as axioms that define goals. We propose an inference procedure based on flexible pattern-based abduction, used to build and morph paths based on the specifications. The approach involves a two-step interaction process between the specifications: observation and comparison. Prototypes of the knowledge base engine, and of a tool that maps UML specifications in XMI (eXtensible Metadata Interchange) format to the knowledge base, have been developed to demonstrate and evaluate the approach.
J. Bézivin, Olivier Gerbé. "Towards a precise definition of the OMG/MDA framework." In Proceedings 16th Annual International Conference on Automated Software Engineering (ASE 2001). doi:10.1109/ASE.2001.989813

We are currently witnessing an important paradigm shift in information system construction: the move from object and component technology to model technology. The object technology revolution allowed the replacement of the over-twenty-year-old stepwise procedural decomposition paradigm with the more fashionable object composition paradigm. Surprisingly, this evolution seems to have triggered another, even more radical change: the current trend toward model transformation. A concrete example is the Object Management Group's rapid move from its previous Object Management Architecture vision to the latest Model-Driven Architecture. This paper proposes an interpretation of this evolution through an abstract investigation. To stay as language-independent as possible, we employ the neutral formalism of Sowa's conceptual graphs to describe the various situations characterizing this organization. This allows us to identify potential problems in the proposed modeling framework and to suggest some possible solutions.
Alexander Egyed, R. Balzer. "Unfriendly COTS integration - instrumentation and interfaces for improved plugability." In Proceedings 16th Annual International Conference on Automated Software Engineering (ASE 2001). doi:10.1109/ASE.2001.989808

It is becoming increasingly desirable to incorporate commercial-off-the-shelf (COTS) tools as software components into larger software systems. Owing to their large user base, COTS tools tend to be cheap, reasonably reliable, and functionally powerful, and reusing them as components significantly reduces development cost and effort. Despite these advantages, developers encounter major obstacles in integrating most COTS tools, because these tools were built as stand-alone applications and make assumptions about their environment that do not hold when they are used as parts of larger systems. Most significantly, while they frequently provide programmatic interfaces that allow other components to obtain services from them on a direct-call basis, they almost always lack the notification and data synchronization facilities required for active integration. We present an integration framework that adds these notification and data synchronization facilities to COTS tools so that they can be integrated as active software components into larger systems. We illustrate the framework through tool suites we constructed around Mathworks' Matlab/Stateflow and Rational's Rose (two widely used, large COTS tools). Our experience to date is that it is indeed possible to transform stand-alone COTS tools into software components.
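The missing notification facility this abstract describes can be pictured as a thin wrapper around the tool's direct-call interface. Everything below is a hypothetical stand-in: `StandaloneTool` and its API are invented for the sketch and have nothing to do with the authors' actual instrumentation of Matlab/Stateflow or Rose.

```python
# Illustrative wrapper adding a notification facility to a stand-alone tool.
# StandaloneTool and its API are invented stand-ins for a real COTS tool.

class StandaloneTool:
    """Stand-in for a COTS tool offering only direct calls, no events."""
    def __init__(self):
        self._model = {}
    def set_value(self, key, value):
        self._model[key] = value
    def get_value(self, key):
        return self._model[key]

class ActiveToolWrapper:
    """Intercepts the tool's programmatic interface and publishes change
    notifications so other components can keep their data synchronized."""
    def __init__(self, tool):
        self._tool = tool
        self._listeners = []
    def subscribe(self, callback):
        self._listeners.append(callback)
    def set_value(self, key, value):
        self._tool.set_value(key, value)  # delegate to the COTS tool
        for notify in self._listeners:    # the added notification facility
            notify(key, value)
    def get_value(self, key):
        return self._tool.get_value(key)

# A second component mirrors the tool's data purely via notifications.
mirror = {}
wrapper = ActiveToolWrapper(StandaloneTool())
wrapper.subscribe(lambda k, v: mirror.__setitem__(k, v))
wrapper.set_value("block.gain", 2.5)
```

In the paper's setting the hard part is the instrumentation itself, i.e. intercepting changes made through the tool's own GUI rather than through a wrapped call; this sketch only shows the publish/subscribe shape that the instrumentation feeds.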
Lars-Åke Fredlund, D. Gurov, T. Noll. "Semi-automated verification of Erlang code." In Proceedings 16th Annual International Conference on Automated Software Engineering (ASE 2001). doi:10.1109/ASE.2001.989820

Erlang is a functional programming language with support for concurrency and message-passing communication, used at Ericsson for developing telecommunication applications. We consider the challenge of verifying temporal properties of systems programmed in Erlang with dynamically evolving process structures. To accomplish this, a rich framework for goal-directed, proof-system-based verification is used. The paper investigates the problem of semi-automating the verification task by identifying the proof parameters crucial for successful proof search.