Pub Date: 1999-11-17 | DOI: 10.1109/HASE.1999.809474
S. Gnesi, D. Latella, M. Massink
Statechart diagrams provide a graphical notation for describing dynamic aspects of system behaviour within the Unified Modelling Language (UML). In this paper, we present a branching-time model-checking approach to the automatic verification of the formal correctness of UML Statechart diagram specifications. We use a formal operational semantics for building a labelled transition system (automaton) which is then used as a model to be checked against correctness requirements expressed in Action-Based Temporal Logic (ACTL). Our reference verification environment is JACK, where automata are represented in a standard format, which facilitates the use of different tools for automatic verification.
Title: Model checking UML Statechart diagrams using JACK. Published in: Proceedings 4th IEEE International Symposium on High-Assurance Systems Engineering.
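As an illustration of the general approach the abstract describes, the sketch below checks a simple action-based reachability property (in the spirit of ACTL's "some run eventually performs action a") over a hand-built labelled transition system. The LTS, the state names, and the property are invented for illustration; this is not JACK's representation or the paper's algorithm.

```python
# Minimal sketch: action-based reachability over a labelled transition
# system (LTS), one of the simplest properties an ACTL checker handles.
from collections import deque

def satisfies_ef_action(lts, initial, action):
    """Return True if some path from `initial` contains a transition
    labelled `action` (breadth-first search over reachable states)."""
    seen = {initial}
    queue = deque([initial])
    while queue:
        state = queue.popleft()
        for label, target in lts.get(state, []):
            if label == action:
                return True
            if target not in seen:
                seen.add(target)
                queue.append(target)
    return False

# Toy LTS: each state maps to a list of (action, successor) pairs.
lts = {
    "idle": [("request", "busy")],
    "busy": [("grant", "done"), ("timeout", "idle")],
    "done": [],
}

print(satisfies_ef_action(lts, "idle", "grant"))  # True
print(satisfies_ef_action(lts, "idle", "deny"))   # False
```

Richer ACTL formulas (nesting, until operators) need fixpoint computations over the full automaton, but the automaton-as-model idea is the same.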
Pub Date: 1999-11-17 | DOI: 10.1109/HASE.1999.809476
A. Bondavalli, I. Mura, I. Majzik
Even though a thorough system specification improves the quality of the design, it is not sufficient to guarantee that a system will satisfy its reliability targets. In this paper, we present an application example of one of the activities performed in the European ESPRIT project HIDE, which aims to create an integrated environment in which UML-based design toolsets are augmented with modeling and analysis tools for the automatic validation of the system under design. We apply an automatic transformation from UML diagrams to Timed Petri Nets for model-based dependability evaluation. It allows a designer to use UML as a front-end for specifying both the system and the user requirements, and to evaluate dependability figures for the system from the early phases of design, thus obtaining valuable clues for design refinement. The transformation completely hides the mathematical background, eliminating both the need for specific expertise in abstract mathematics and the tedious remodeling of the system for mathematical analysis.
Title: Automatic dependability analysis for supporting design decisions in UML
Pub Date: 1999-11-17 | DOI: 10.1109/HASE.1999.809500
Tom Chen, A. Andrews, A. Hajjar, Charles Anderson, M. Sahinoglu
Testing behavioral models before they are released to the synthesis and logic design phase is a tedious process, to say the least. A common practice is the test-it-to-death approach, in which millions or even billions of vectors are applied and the results are checked for possible bugs. The vectors applied to behavioral models include functional vectors, but a significant portion of the vectors is random in nature, including random combinations of instructions. In this paper, we present and evaluate a stopping rule that can be used to determine when to stop the current testing phase under a given testing technique. We demonstrate the use of the stopping rule on two complex VHDL models that were tested for branch coverage across four testing phases, and we compare the savings and quality of testing with and without the stopping rule.
Title: How much testing is enough? Applying stopping rules to behavioral model testing
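A coverage-saturation stopping rule of the general kind the abstract evaluates can be sketched as follows. The specific rule, the `patience` parameter, and the batch data are assumptions made for illustration; they are not the authors' statistical rule.

```python
# Illustrative stopping rule: halt the current testing phase when
# `patience` consecutive test batches yield no new branch coverage.
# Each batch is reported as a set of covered branch ids.

def stop_index(batches, patience=3):
    """Return the 1-based index of the batch at which testing stops,
    or None if the rule never fires."""
    covered = set()
    stagnant = 0
    for i, batch in enumerate(batches, start=1):
        new = batch - covered          # branches not seen before
        covered |= batch
        stagnant = 0 if new else stagnant + 1
        if stagnant >= patience:
            return i
    return None

batches = [{1, 2}, {2, 3}, {3}, {1}, {2}, {4}]
print(stop_index(batches))  # 5: batches 3, 4 and 5 add nothing new
```

Note the trade-off the paper measures directly: stopping at batch 5 saves the cost of batch 6 but would miss branch 4.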
Pub Date: 1999-11-17 | DOI: 10.1109/HASE.1999.809478
J. Kirby, M. Archer, C. Heitmeyer
SCR (Software Cost Reduction) is a formal method for specifying and analyzing system requirements that has previously been applied to control systems. This paper describes a case study in which the SCR method was used to specify and analyze a different class of system: a cryptographic system called CD, which must satisfy a large set of security properties. The paper describes how a suite of tools supporting SCR (a consistency checker, simulator, model checker, invariant generator, theorem prover, and validity checker) was used to detect errors in the SCR specification of CD and to verify that the specification satisfies seven security properties. The paper also discusses, in light of our experience with CD, issues of concern to software developers about formal methods, e.g. ease of use, cost-effectiveness, scalability, how to translate a prose specification into a formal notation, and what process to follow in applying a formal method. Some unexpected results of the case study are also described.
Title: Applying formal methods to an information security device: An experience report
Pub Date: 1999-11-17 | DOI: 10.1109/HASE.1999.809495
Alec Yasinsac, W. Wulf
Tools to evaluate cryptographic protocols (CPs) proliferated in the literature after the development of BAN Logic, many of them created to repair weaknesses in BAN Logic. Unfortunately, these tools are all complex and difficult to implement individually, and little or no effort has gone into implementing multiple tools in a workbench environment. We propose a framework that allows a protocol analyst to exercise multiple CP evaluation tools in a single environment. Moreover, this environment exhibits characteristics that will enhance the effectiveness of the CP evaluation methods themselves.
Title: A framework for a cryptographic protocol evaluation workbench
Pub Date: 1999-11-17 | DOI: 10.1109/HASE.1999.809497
M. Whalen, M. Heimdahl
Although formal requirements specifications can provide a complete and consistent description of a safety-critical software system, designing and developing production-quality code from high-level specifications can be a time-consuming and error-prone process. Automated translation, or code generation, from the specification to production code can alleviate many of the problems associated with design and implementation. However, current approaches have been unsuitable for safety-critical environments because they employ complex and/or ad hoc translation methods. In this paper we discuss the issues involved in automatic code generation for high-assurance systems and define a set of requirements that code generators for this domain must satisfy. These requirements cover the formality of the translation, the quality of the code generator, and the properties of the generated code.
Title: On the requirements of high-integrity code generation
Pub Date: 1999-11-17 | DOI: 10.1109/HASE.1999.809471
M. Brockmeyer
Automated model-checking of formal specifications for real-time systems has remained an elusive goal due to the state-space explosion problem. This paper describes an approach to testing formal specifications using automatically generated testing modules. This technique preserves many of the advantages of using formal specifications while mitigating the state-space explosion problem by restricting state-space exploration to the subset determined by the test. Because the testing modules are defined in the same formalism as the specification, the semantics of the test are precisely defined. Moreover, existing model-checking tools can be leveraged to perform the testing. Finally, this approach reduces the evaluation of a potentially complex assertion to a simple reachability condition in the tested specification's state space.
Title: Using Modechart modules for testing formal specifications
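The reduction the abstract describes can be pictured as follows: compose the specification automaton with a test monitor that reacts to the same event labels, then ask whether the monitor's verdict state is reachable in the product. The composition semantics, state names, and monitor below are assumptions for illustration and are not Modechart's actual semantics.

```python
# Sketch: synchronous product of a specification automaton and a test
# monitor, followed by a reachability check for the monitor's verdict.
from collections import deque

def verdict_reachable(spec, monitor, start, verdict):
    """spec/monitor: dict state -> {event: next_state}.
    start is a (spec_state, monitor_state) pair. The monitor
    self-loops on events it does not mention."""
    seen = {start}
    queue = deque([start])
    while queue:
        s, m = queue.popleft()
        if m == verdict:
            return True
        for event, s2 in spec.get(s, {}).items():
            m2 = monitor.get(m, {}).get(event, m)
            if (s2, m2) not in seen:
                seen.add((s2, m2))
                queue.append((s2, m2))
    return False

spec = {"s0": {"req": "s1"}, "s1": {"ack": "s0", "err": "s2"}, "s2": {}}
monitor = {"m0": {"err": "FAIL"}}  # test question: can `err` ever occur?
print(verdict_reachable(spec, monitor, ("s0", "m0"), "FAIL"))  # True
```

Only the product states the test can drive are explored, which is the sense in which the monitor focuses state-space exploration.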
Pub Date: 1999-11-17 | DOI: 10.1109/HASE.1999.809499
P. Ammann, P. Black
Software developers use a variety of methods, including both formal methods and testing, to argue that their systems are suitable components for high assurance applications. In this paper, we develop another connection between formal methods and testing by defining a specification-based coverage metric to evaluate test sets. Formal methods in the form of a model checker supply the necessary automation to make the metric practical. The metric gives the software developer assurance that a given test set is sufficiently sensitive to the structure of an application's specification. In this paper, we develop the necessary foundation for the metric and then illustrate the metric on an example.
Title: A specification-based coverage metric to evaluate test sets
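A toy version of a specification-based coverage metric can be sketched as below. Note the simplification: the paper derives its metric with a model checker, whereas this sketch evaluates boolean specification clauses directly on test cases; the clauses, the both-outcomes coverage criterion, and the data are assumptions for illustration.

```python
# Illustrative metric: a spec clause counts as covered when the test
# set exercises both of its outcomes (True and False), so the tests
# are sensitive to that part of the specification's structure.

def spec_coverage(clauses, tests):
    """clauses: list of predicates (test case -> bool).
    Returns the fraction of clauses whose outcomes both occur."""
    covered = 0
    for clause in clauses:
        outcomes = {clause(t) for t in tests}
        if outcomes == {True, False}:
            covered += 1
    return covered / len(clauses)

clauses = [lambda s: s["mode"] == "on", lambda s: s["level"] > 10]
tests = [{"mode": "on", "level": 5}, {"mode": "off", "level": 5}]
print(spec_coverage(clauses, tests))  # 0.5: the level clause is never True
```

A low score flags a test set that never distinguishes some condition in the specification, which is the kind of assurance gap the metric is meant to expose.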
Pub Date: 1999-11-17 | DOI: 10.1109/HASE.1999.809494
J. Ren, M. Cukier, P. Rubel, W. Sanders, D. Bakken, D. Karr
Building dependable distributed systems using ad hoc methods is a challenging task. Without proper support, an application programmer must face the daunting requirement of having to provide fault tolerance at the application level, in addition to dealing with the complexities of the distributed application itself. This approach requires a deep knowledge of fault tolerance on the part of the application designer, and has a high implementation cost. What is needed is a systematic approach to providing dependability to distributed applications. Proteus, part of the AQuA architecture, fills this need and provides facilities to make a standard distributed CORBA application dependable, with minimal changes to an application. Furthermore, it permits applications to specify, either directly or via the Quality Objects (QuO) infrastructure, the level of dependability they expect of a remote object, and will attempt to configure the system to achieve the requested dependability level. Our previous papers have focused on the architecture and implementation of Proteus. This paper describes how to construct dependable applications using the AQuA architecture, by describing the interface that a programmer is presented with and the graphical monitoring facilities that it provides.
Title: Building dependable distributed applications using AQuA
Pub Date: 1999-11-17 | DOI: 10.1109/HASE.1999.809486
J. Voas
Can COTS software be tolerated in high assurance environments? Or is this hopelessly impossible? My position is that COTS software will exist in high assurance environments (in the near future) no matter what prudence or due diligence suggests. Prudence and due diligence would argue that it is foolish to expect dependable functionality from generic products that are mass produced, engineered for the typical user (who can tolerate failures because they are mere nuisances), suffer from shrunken development and testing schedules, and carry shrink wrap disclaimers. Prudence and due diligence would ask why we opt to use COTS software when we cannot even reach our high dependability goals via code that is written from scratch and according to standards that are known to improve dependability. After all, the COTS vendors do not follow these standards. Is it reasonable to expect software that is intended for the mass market to be highly dependable? Probably not.
Title: COTS and high assurance: an oxymoron?