The close association between higher order functions and algorithmic skeletons is a promising basis for the automatic parallelisation of programs. An approach to automatically synthesizing higher order functions from functional programs through proof planning is presented. Our work has been conducted within the context of a parallelising compiler for SML, with the objective of exploiting the parallelism latent in potential uses of higher order functions in programs.
{"title":"Higher order function synthesis through proof planning","authors":"Andrew Cook, Andrew Ireland, G. Michaelson","doi":"10.1109/ASE.2001.989817","DOIUrl":"https://doi.org/10.1109/ASE.2001.989817","url":null,"abstract":"The close association between higher order functions and algorithmic skeletons is a promising source of automatic parallelisation of programs. An approach to automatically synthesizing higher order functions from functional programs through proof planning is presented Our work has been conducted within the context of a parallelising compiler for SML, with the objective of exploiting parallelism latent in potential higher order function use in programs.","PeriodicalId":433615,"journal":{"name":"Proceedings 16th Annual International Conference on Automated Software Engineering (ASE 2001)","volume":"87 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2001-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127199730","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Formalizing security requirements has received significant attention since the 1970s. However, a general method for specifying security requirements is still missing. In particular, little work has been presented on specifying and verifying that a given application is a secure resource consumer. The purpose of this work is to set up a methodology for (1) specifying the security requirements of service providers and (2) proving that an application uses resources securely. The developed theory will be evaluated and applied in two areas: secure mobile code development and secure COTS-based software development.
{"title":"Security specification and verification","authors":"P. Fenkam","doi":"10.1109/ASE.2001.989847","DOIUrl":"https://doi.org/10.1109/ASE.2001.989847","url":null,"abstract":"Formalizing security requirements has received a significant attention since the 70s. However a general method for specifying security requirements is still missing. Especially, little work has been presented on specifying and verifying that a given application is a secure resource consumer The purpose of this work is to set up a methodology for (1) specifying security requirements of service providers and (2) proving that some application securely uses some resources. The developed theory will be evaluated and applied in two different areas: secure mobile code development and secure COTS-based software development.","PeriodicalId":433615,"journal":{"name":"Proceedings 16th Annual International Conference on Automated Software Engineering (ASE 2001)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2001-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127208866","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Nowadays, component-based technologies offer straightforward ways of building applications from existing components. Although these technologies differ in the level of heterogeneity among components they support (e.g., CORBA or COM versus J2EE), they all suffer from the problem of dynamic integration: once components have been successfully integrated in a uniform context, how is it possible to check, control, and assess that the dynamic behavior of the resulting application will not deadlock? We propose an architectural, connector-based approach to this problem. We compose a system in such a way that it is possible to check whether and why the system deadlocks. Depending on the kind of deadlock, we have a strategy that automatically operates on the connector part of the system architecture to obtain a suitably equivalent version of the system that is deadlock-free.
{"title":"Connectors synthesis for deadlock-free component based architectures","authors":"P. Inverardi, Simone Scriboni","doi":"10.1109/ASE.2001.989803","DOIUrl":"https://doi.org/10.1109/ASE.2001.989803","url":null,"abstract":"Nowadays component-based technologies offer straightforward ways of building applications from existing components. Although these technologies might differ in terms of the level of heterogeneity among components they support, e.g. CORBA or COM versus J2EE, they all suffer the problem of dynamic integration. That is, once components are successfully integrated in a uniform context how is it possible to check, control and assess that the dynamic behavior of the resulting application will not deadlock? The authors propose an architectural, connector-based approach to this problem. We compose a system in such a way that it is possible to check whether and why the system deadlocks. Depending on the kind of deadlock, we have a strategy that automatically operates on the connector part of the system architecture in order to obtain a suitably equivalent version of the system which is deadlock-free.","PeriodicalId":433615,"journal":{"name":"Proceedings 16th Annual International Conference on Automated Software Engineering (ASE 2001)","volume":"164 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2001-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127409783","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Source code duplication occurs frequently within large software systems. Pieces of source code, functions, and data types are often duplicated in part or in whole, for a variety of reasons: programmers may simply be reusing a piece of code via copy and paste, or they may be "re-inventing the wheel". Previous research on clone detection has mainly focused on identifying pieces of code with similar (or nearly similar) structure. Our approach instead examines the source code text (comments and identifiers) and identifies implementations of similar high-level concepts (e.g., abstract data types). The approach uses an information retrieval technique, latent semantic indexing, to statically analyze the software system and determine semantic similarities between source code documents (functions, files, or code segments). These similarity measures are used to drive the clone detection process. Our approach is intended to enhance and augment existing clone detection methods based on structural analysis; this synergistic use of methods should improve the quality of clone detection. A set of experiments demonstrates the use of the semantic similarity measure to identify clones within a version of NCSA Mosaic.
{"title":"Identification of high-level concept clones in source code","authors":"Andrian Marcus, Jonathan I. Maletic","doi":"10.1109/ASE.2001.989796","DOIUrl":"https://doi.org/10.1109/ASE.2001.989796","url":null,"abstract":"Source code duplication occurs frequently within large software systems. Pieces of source code, functions, and data types are often duplicated in part or in whole, for a variety of reasons. Programmers may simply be reusing a piece of code via copy and paste or they may be \"re-inventing the wheel\". Previous research on the detection of clones is mainly focused on identifying pieces of code with similar (or nearly similar) structure. Our approach is to examine the source code text (comments and identifiers) and identify implementations of similar high-level concepts (e.g., abstract data types). The approach uses an information retrieval technique (i.e., latent semantic indexing) to statically analyze the software system and determine semantic similarities between source code documents (i.e., functions, files, or code segments). These similarity measures are used to drive the clone detection process. The intention of our approach is to enhance and augment existing clone detection methods that are based on structural analysis. This synergistic use of methods will improve the quality of clone detection. A set of experiments is presented that demonstrate the usage of semantic similarity measure to identify clones within a version of NCSA Mosaic.","PeriodicalId":433615,"journal":{"name":"Proceedings 16th Annual International Conference on Automated Software Engineering (ASE 2001)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2001-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121804622","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Nowadays, test cases may correspond to elaborate programs. It is therefore sensible to specify test cases in order to obtain a more abstract view of them. This paper explores the notion of test purpose as a way to specify a set of test cases. It shows how test purposes are exploited today by several tools that automate the generation of test cases. It presents the major relations that link test purposes, test cases, and the reference specification. It also explores the similarities and differences between the specification of test cases and the specification of programs. This opens perspectives for the synthesis and verification of test cases, and for other activities such as test case retrieval.
{"title":"Test purposes: adapting the notion of specification to testing","authors":"Y. Ledru, L. D. Bousquet, Pierre Bontron, Olivier Maury, Catherine Oriat, Marie-Laure Potet","doi":"10.1109/ASE.2001.989798","DOIUrl":"https://doi.org/10.1109/ASE.2001.989798","url":null,"abstract":"Nowadays, test cases may correspond to elaborate programs. It is therefore sensible to try to specify test cases in order to get a more abstract view of these. This paper explores the notion of test purpose as a way to specify a set of test cases. It shows how test purposes are exploited today by several tools that automate the generation of test cases. It presents the major relations that link test purposes, test cases and reference specification. It also explores the similarities and differences between the specification of test cases, and the specification of programs. This opens perspectives for the synthesis and the verification of test cases, and for other activities like test case retrieval.","PeriodicalId":433615,"journal":{"name":"Proceedings 16th Annual International Conference on Automated Software Engineering (ASE 2001)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2001-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114362464","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mutation analysis inserts faults into a program to create test sets that distinguish the mutant from the original program. Inserted faults must represent plausible errors. Standard transformations can mutate scalar values such as integers, floats, and character data. Mutating objects is an open problem, because object semantics are defined by the programmer and can vary widely. We develop mutation operators and support tools that can mutate Java library items heavily used in commercial software. Our mutation engine supports reusable libraries of mutation components to inject faults into objects that instantiate items from these common Java libraries. Our technique should be effective for evaluating real-world software test suites.
{"title":"A technique for mutation of Java objects","authors":"J. Bieman, Sudipto Ghosh, R. Alexander","doi":"10.1109/ASE.2001.989824","DOIUrl":"https://doi.org/10.1109/ASE.2001.989824","url":null,"abstract":"Mutation analysis inserts faults into a program to create test sets that distinguish the mutant from the original program. Inserted faults must represent plausible errors. Standard transformations can mutate scalar values such as integers, floats, and character data. Mutating objects is an open problem, because object semantics are defined by the programmer and can vary widely. We develop mutation operators and support tools that can mutate Java library items that are heavily used in commercial software. Our mutation engine can support reusable libraries of mutation components to inject faults into objects that instantiate items from these common Java libraries. Our technique should be effective for evaluating real-world software testing suites.","PeriodicalId":433615,"journal":{"name":"Proceedings 16th Annual International Conference on Automated Software Engineering (ASE 2001)","volume":"125 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2001-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131941970","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Unified Modeling Language (UML) has become widely accepted as a standard in software development. Several tools have been produced to support UML model validation. These tools translate a UML model into a validation language such as PROMELA. However, they have some shortcomings: there is no proof of correctness (with respect to the UML semantics) for these tools, and no tool supports validation of both the static and dynamic aspects of a UML model. To overcome these shortcomings, we present a toolset based on a semantic model that uses abstract state machines. Since the toolset is derived from the semantic model, it is correct with respect to that model. In addition, the toolset can be used to validate both the static and dynamic aspects of a model.
{"title":"A UML validation toolset based on abstract state machines","authors":"Wuwei Shen, K. Compton, J. Huggins","doi":"10.1109/ASE.2001.989819","DOIUrl":"https://doi.org/10.1109/ASE.2001.989819","url":null,"abstract":"The Unified Modeling Language has become widely accepted as a standard in software development. Several tools have been produced to support UML model validation. These tools translate a UML model into a validation language such as PROMELA. However they have some shortcomings: there is no proof of correctness (with respect to the UML semantics) for these tools; and there is no tool that supports validation for both the static and dynamic aspects of a UML model. In order to overcome these shortcomings, we present a toolset which is based on the semantic model using abstract state machines. Since the toolset is derived from the semantic model, the toolset is correct with respect to the semantic model. In addition, this toolset can be used to validate both the static and dynamic aspects of a model.","PeriodicalId":433615,"journal":{"name":"Proceedings 16th Annual International Conference on Automated Software Engineering (ASE 2001)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2001-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133095133","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper presents a mechanizable framework for specifying, developing, and reasoning about complex systems. The framework combines features from algebraic specifications, abstract state machines, and refinement calculus, all couched in a categorical setting. In particular, we show how to extend algebraic specifications to evolving specifications (especs) in such a way that composition and refinement operations extend to capture the dynamics of evolving, adaptive, and self-adaptive software development, while remaining efficiently computable. The framework is partially implemented in the Epoxi system.
{"title":"Composition and refinement of behavioral specifications","authors":"Dusko Pavlovic, Douglas R. Smith","doi":"10.1109/ASE.2001.989801","DOIUrl":"https://doi.org/10.1109/ASE.2001.989801","url":null,"abstract":"This paper presents a mechanizable framework for specifying, developing, and reasoning about complex systems. The framework combines features from algebraic specifications, abstract state machines, and refinement calculus, all couched in a categorical setting. In particular, we show how to extend algebraic specifications to evolving specifications (especs) in such a way that composition and refinement operations extend to capture the dynamics of evolving, adaptive, and self-adaptive software development, while remaining efficiently computable. The framework is partially implemented in the Epoxi system.","PeriodicalId":433615,"journal":{"name":"Proceedings 16th Annual International Conference on Automated Software Engineering (ASE 2001)","volume":"71 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2001-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114719759","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Summary form only given. Given a program and a specification, one may want to verify mechanically and efficiently that the program satisfies the specification. Software verification techniques typically involve theorem proving, and even when a formal specification is readily available, the consumption of computational resources is a major issue. Psychological factors should not be overlooked either: extra expertise is often needed to verify a program, so tools that can verify programs automatically are helpful. At the same time, ubiquitous computing has made the correctness of a program both a security and a performance issue. If you run a piece of mobile code on your machine, you expect that the code does not access storage unlawfully, and performance is sacrificed to make sure such bad behavior cannot happen. If programs are written in an intermediate language able to capture and verify the properties mentioned above, the host machine benefits. This paper focuses on providing a type-theoretic solution to the verification of mobile programs. One of our primary tools is index types, a form of non-traditional type: an index type system extends the type system of a language with indices and predicates on those indices, allowing index types to express properties of a program. To type check a program annotated with index types, we often call an external decision procedure. Another concept used is proof-carrying code, a major advantage of which is that much of the theorem proving is shifted offline: when proof-carrying code is used to verify a property, the time spent on verification is mainly proof checking, which is considerably cheaper than theorem proving.
{"title":"Verify properties of mobile code","authors":"Songtao Xia","doi":"10.1109/ASE.2001.989853","DOIUrl":"https://doi.org/10.1109/ASE.2001.989853","url":null,"abstract":"Summary form only given. Given a program and a specification, you may want to verify mechanically and efficiently that this program satisfies the specification. Software verification techniques typically involve theorem proving. If a formal specification is easily available, consumption of computational resources is a major issue. Meanwhile, we shall not overlook the psychological factors. Often, you need extra expertise to verify a program. Tools that can automatically verify programs are helpful. On the other hand, ubiquitous computing has made the correctness of a program both a security and a performance issue. If you run a piece of mobile code on your machine, you will expect that the code does not access storages unlawfully. To make sure bad things won't happen, performance is sacrificed. If programs are written in an intermediate language that is able to capture and verify properties mentioned above, your host machine will benefit from it. This paper focuses on providing a type-theoretic solution to the verification of mobile programs. One of our primary tools is index types. Index types are a form of non-traditional types. An index type system extends the type system of a language with indices and predicates on those indices. Index types can express properties of program. To type check a program annotated with index types, we often will call an external decision procedure. Another concept used is the proof-carrying code. One of the major advantages of proof-carrying code is that a lot of theorem proving is shifted offline. When we use proof-carrying code to verify a property, the time spent on verification is mainly on proof-checking, which is considerably cheaper than theorem proving.","PeriodicalId":433615,"journal":{"name":"Proceedings 16th Annual International Conference on Automated Software Engineering (ASE 2001)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2001-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129427562","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Software engineers building a complex system make use of a number of informal and semi-formal notations. We describe a framework, xlinkit, for managing the consistency of development artifacts expressed in such notations. xlinkit supports distributed software engineering by providing a distribution-transparent language for expressing constraints between specifications, and it specifies a semantics for those constraints that permits the generation of hyperlinks between inconsistent elements. We give a formal semantics for link generation and show how we expressed the rules of the UML foundation/core modules in our language. We outline how we implemented xlinkit as a lightweight web service using open standard technology and present the results of an evaluation against several sizeable UML models provided by industrial partners.
{"title":"Static consistency checking for distributed specifications","authors":"Christian Nentwich, W. Emmerich, A. Finkelstein","doi":"10.1109/ASE.2001.989797","DOIUrl":"https://doi.org/10.1109/ASE.2001.989797","url":null,"abstract":"Software engineers building a complex system make use of a number of informal and semi-formal notations. We describe a framework, xlinkit, for managing the consistency of development artifacts expressed in such notations. xlinkit supports distributed software engineering by providing a distribution-transparent language for expressing constraints between specifications. It specifies a semantics for those constraints that permits the generation of hyperlinks between inconsistent elements. We give a formal semantics for link generation, and show how we expressed the rules of the UML foundation/core modules in our language. We outline how we implemented xlinkit as a light-weight web service using open standard technology and present the results of an evaluation against several sizeable UML models provided by industrial partners.","PeriodicalId":433615,"journal":{"name":"Proceedings 16th Annual International Conference on Automated Software Engineering (ASE 2001)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2001-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134421567","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}