Pub Date: 1999-11-17 | DOI: 10.1109/HASE.1999.809475
UML-based analysis of embedded systems using a mapping to VHDL
William E. McUmber, B. Cheng
Methods for developing and modeling embedded systems, and for rigorously verifying behavior before committing to code, are increasingly important. A number of object-oriented techniques and notations have been introduced, but recently it appears that the Unified Modeling Language (UML) could be a notation broad enough in scope to represent a variety of domains and to gain widespread use. Currently, however, UML is only a notation, with no formal semantics attached to the individual diagrams. To address this problem, we have developed a framework for deriving VHDL specifications from UML class and state diagrams, capturing both the structure and the behavior of embedded systems. The derived VHDL specifications enable us to perform behavioral simulation of the UML models.
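As a hedged illustration of the kind of mapping the abstract describes — each UML state diagram realized as a VHDL process that reacts to event signals — the sketch below renders a toy transition table as VHDL text. The states, events, and emitted template are invented for illustration; the paper's actual mapping rules are not reproduced here.

```python
# Hypothetical sketch: render a UML state diagram as a VHDL process.
# The transition table and VHDL template are illustrative only; the
# paper's actual mapping rules are more elaborate.

transitions = {  # (state, event) -> next state, for a toy controller
    ("IDLE", "start"): "RUNNING",
    ("RUNNING", "halt"): "IDLE",
    ("RUNNING", "fault"): "ERROR",
}

def to_vhdl_process(transitions, clock="clk", state_sig="state"):
    states = sorted({s for s, _ in transitions} | set(transitions.values()))
    lines = [f"process ({clock})", "begin",
             f"  if rising_edge({clock}) then",
             f"    case {state_sig} is"]
    for src in states:
        arms = [(ev, dst) for (s, ev), dst in transitions.items() if s == src]
        lines.append(f"      when {src} =>")
        for ev, dst in arms:
            lines.append(f"        if {ev} = '1' then {state_sig} <= {dst}; end if;")
        if not arms:  # terminal state: no outgoing transitions
            lines.append("        null;")
    lines += ["    end case;", "  end if;", "end process;"]
    return "\n".join(lines)

print(to_vhdl_process(transitions))
```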
{"title":"UML-based analysis of embedded systems using a mapping to VHDL","authors":"William E. McUmber, B. Cheng","doi":"10.1109/HASE.1999.809475","DOIUrl":"https://doi.org/10.1109/HASE.1999.809475","url":null,"abstract":"Methods for developing and modeling embedded systems and rigorously verifying behavior before committing to code are increasingly important. A number of object-oriented techniques and notations have been introduced but recently, it appears that the Unified Modeling Language (UML) could be a notation broad enough in scope to represent a variety of domains and gain widespread use. Currently, however, UML is only a notation, with no formal semantics attached to the individual diagrams. In order to address this problem, we have developed a framework for deriving VHDL specifications from the class and state diagrams in order to capture the structure and the behavior of embedded systems. The derived VHDL specifications enable us to perform behavior simulation of the UML models.","PeriodicalId":369187,"journal":{"name":"Proceedings 4th IEEE International Symposium on High-Assurance Systems Engineering","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114547008","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 1999-11-17 | DOI: 10.1109/HASE.1999.809488
Identifying domain axioms using binary decision diagrams
Barbara J. Czerny, M. Heimdahl
Statically analyzing requirements specifications to assure that they possess desirable properties is a useful activity in any rigorous software development project. The analysis is performed on an abstraction of the original requirements specification. The abstractions in the model may lead to spurious errors in the analysis output: errors that are reported to occur under certain conditions, although information abstracted away from the model precludes those conditions from being satisfied in the original model. A high ratio of spurious errors to true errors in the analysis output makes it difficult, error-prone, and time-consuming to find and correct the true errors. In this paper we describe a technique that uses binary decision diagrams to help the analyst identify the abstractions that are leading to excessive spurious errors in the analysis output. Information about these abstractions can then be incorporated into the analysis to eliminate the corresponding spurious error reports.
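The underlying check can be illustrated in miniature: if the analysis reports an error under condition C, and a domain axiom A abstracted out of the model rules C out, then C AND A is unsatisfiable and the report is spurious. The sketch below assumes the third-party `dd` BDD package; the variables and axiom are invented, and the paper's procedure for identifying candidate axioms is more involved than this single test.

```python
# Toy sketch of the spurious-error check with BDDs, using the third-party
# `dd` package (pip install dd). Variables and the axiom are invented.
from dd.autoref import BDD

bdd = BDD()
bdd.declare("valve_open", "valve_closed")

# Error condition reported by the abstracted analysis:
# "valve reported both open and closed at once".
error_cond = bdd.var("valve_open") & bdd.var("valve_closed")

# Candidate domain axiom that was abstracted away from the model:
# the valve sensor never reports both states simultaneously.
axiom = ~(bdd.var("valve_open") & bdd.var("valve_closed"))

# If the conjunction is unsatisfiable, the error report is spurious, and
# the axiom can be fed back into the analysis to suppress it.
spurious = (error_cond & axiom) == bdd.false
print("spurious report:", spurious)  # True
```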
{"title":"Identifying domain axioms using binary decision diagrams","authors":"Barbara J. Czerny, M. Heimdahl","doi":"10.1109/HASE.1999.809488","DOIUrl":"https://doi.org/10.1109/HASE.1999.809488","url":null,"abstract":"Statically analyzing requirements specifications to assure that they possess desirable properties is a useful activity in any rigorous software development project. The analysis is performed on an abstraction of the original requirements specification. The abstractions in the model may lead to spurious errors in the analysis output. Spurious errors are errors that are reported to occur under certain conditions, but information abstracted from the model precludes the conditions from being satisfied in the original model. A high ratio of spurious errors to true errors in the analysis output makes it difficult, error-prone, and time consuming to find and correct the true errors. In this paper we describe a technique that uses binary decision diagrams to help the analyst identify the abstractions that are lending to excessive spurious errors in the analysis output. Information about these abstractions can then be incorporated into the analysis to eliminate the corresponding spurious error reports.","PeriodicalId":369187,"journal":{"name":"Proceedings 4th IEEE International Symposium on High-Assurance Systems Engineering","volume":"102 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129329409","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 1999-11-17 | DOI: 10.1109/HASE.1999.809472
Assurance-based Y2K testing
W. Tsai, W. Shao, Sanjai Rayadurgam, Jinbao Li, R. Paul
Describes assurance techniques for Year-2000 (Y2K) testing. The Y2K problem is an important issue in the computer industry today, and testing is still the main technique for quality assurance. There is a need to ensure that software is reasonably safe from Y2K faults after testing. This paper presents a statistical model for providing this assurance; the model explicitly represents Y2K faults as well as the ripple effects induced by Y2K modifications. The paper then describes two processes that use the model in practice: a bottom-up process that can be used alongside software development, and a top-down process that can be used when the project is nearly complete. Both processes can be embedded in an existing testing process with minimal changes and minimal extra effort.
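The abstract does not reproduce the model's formulas. As a reminder of the kind of arithmetic on which such assurance arguments rest (a generic zero-failure demonstration bound, not the paper's model): after n independent, representative tests all pass, confidence that the per-demand failure probability is below θ is 1 - (1 - θ)^n.

```python
# Generic zero-failure test arithmetic (not the paper's model): after n
# passing tests, confidence that per-demand failure probability < theta.
import math

def confidence(n_passing_tests: int, theta: float) -> float:
    """Confidence that failure probability per demand is below theta,
    given n independent tests with no Y2K failures observed."""
    return 1.0 - (1.0 - theta) ** n_passing_tests

def tests_needed(theta: float, target_confidence: float) -> int:
    """Smallest n with confidence(n, theta) >= target_confidence."""
    return math.ceil(math.log(1.0 - target_confidence) / math.log(1.0 - theta))

print(confidence(2995, 0.001))    # ~0.95
print(tests_needed(0.001, 0.95))  # 2995
```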
{"title":"Assurance-based Y2K testing","authors":"W. Tsai, W. Shao, Sanjai Rayadurgam, Jinbao Li, R. Paul","doi":"10.1109/HASE.1999.809472","DOIUrl":"https://doi.org/10.1109/HASE.1999.809472","url":null,"abstract":"Describes assurance techniques for Year-2000 (Y2K) testing. The Y2K problem is an important issue in the computer industry today, and testing is still the main technique for quality assurance. There is a need to ensure that the software is reasonably safe from Y2K faults after testing. This paper uses a statistical model for ensuring this, and it explicitly models Y2K faults as well as the ripples induced by Y2K modifications. The paper then describes two processes that use the model in practice: a bottom-up process that can be used together with software development, and a top-down process that can be used when the project is almost completed. These processes can be easily embedded in an existing testing process with minimal changes and minimal extra effort.","PeriodicalId":369187,"journal":{"name":"Proceedings 4th IEEE International Symposium on High-Assurance Systems Engineering","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125351326","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 1999-11-17 | DOI: 10.1109/HASE.1999.809470
DynaMICs: an automated and independent software-fault detection approach
A. Gates, P. Teller
Computers are omnipresent in our society, creating a reliance that demands high-assurance systems. Traditional verification and validation approaches may not be sufficient to identify the existence of software faults. Dynamic Monitoring with Integrity Constraints (DynaMICs) augments existing approaches with: (1) elicitation of constraints from domain experts and developers that capture knowledge about real-world objects, assumptions, and limitations; (2) storage and maintenance of constraints separate from the program; (3) automatic generation of monitoring code and program instrumentation; (4) performance-friendly monitoring; and (5) tracing among specifications, code, and documentation. The primary motivation for DynaMICs is to facilitate the detection of faults, in particular those that result from insufficient communication, changes in intended software use, and errors introduced through external interfaces. After presenting related work and an overview of DynaMICs, this paper outlines the methodology used to provide an automated and independent software-fault detection system.
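A minimal sketch of items (2) and (3) — constraints maintained apart from the program, with monitoring code generated rather than hand-written — follows. The constraint table, decorator, and monitored function are hypothetical illustrations, not DynaMICs' actual machinery.

```python
# Hypothetical sketch of DynaMICs-style monitoring: constraints live in a
# table separate from the program, and instrumentation is generated rather
# than hand-written. Names and constraints here are invented.
import functools

# Constraint table, conceptually maintained apart from the source code
# (e.g., elicited from domain experts about real-world objects).
CONSTRAINTS = {
    "set_tank_pressure": [
        ("pressure is non-negative", lambda args: args["psi"] >= 0),
        ("pressure within rated limit", lambda args: args["psi"] <= 150),
    ],
}

def monitored(func):
    """Auto-generated wrapper: check the function's constraints on entry."""
    checks = CONSTRAINTS.get(func.__name__, [])
    @functools.wraps(func)
    def wrapper(**kwargs):
        for description, predicate in checks:
            if not predicate(kwargs):
                # A real system would route this to a fault handler with
                # traceability back to the originating specification.
                raise AssertionError(f"constraint violated: {description}")
        return func(**kwargs)
    return wrapper

@monitored
def set_tank_pressure(psi):
    return f"pressure set to {psi} psi"

print(set_tank_pressure(psi=120))  # ok
# set_tank_pressure(psi=999)       # raises: pressure within rated limit
```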
{"title":"DynaMICs: an automated and independent software-fault detection approach","authors":"A. Gates, P. Teller","doi":"10.1109/HASE.1999.809470","DOIUrl":"https://doi.org/10.1109/HASE.1999.809470","url":null,"abstract":"Computers are omnipresent in our society, creating a reliance that demands high-assurance systems. Traditional verification and validation approaches may not be sufficient to identify the existence of software faults. Dynamic Monitoring with Integrity Constraints (DynaMICs) augments existing approaches by including: (1) elicitation of constraints from domain experts and developers that capture knowledge about real-world objects, assumptions and limitations, (2) constraints stored and maintained separate from the program, (3) automatic generation of monitoring code and program instrumentation, (4) performance-friendly monitoring, and (5) tracing among specifications, code and documentation. The primary motivation for DynaMICs is to facilitate the detection of faults, in particular those that result from insufficient communication, changes in intended software use and errors introduced through external interfaces. After presenting related work and an overview of DynaMICs, this paper outlines the methodology used to provide an automated and independent software-fault detection system.","PeriodicalId":369187,"journal":{"name":"Proceedings 4th IEEE International Symposium on High-Assurance Systems Engineering","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126087849","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 1999-11-17 | DOI: 10.1109/HASE.1999.809480
COTS-based fault tolerance in deep space: Qualitative and quantitative analyses of a bus network architecture
A. Tai, S. Chau, L. Alkalai
Among the COTS applications in the X2000 architecture for deep-space missions, the use of commercial bus standards is the highest-payoff COTS application, since a bus interface has a global impact on system cost and an enabling effect on system capability. While COTS bus standards enable significant cost reductions, it is a great challenge to deliver a highly reliable, long-term survivable system employing COTS standards that were not developed for mission-critical applications. The spirit of our solution is to exploit the pertinent standard features of a COTS product to circumvent its shortcomings, even though those features were not originally designed for highly reliable systems. In this paper we discuss our experiences and findings on the design and assessment of an IEEE 1394 compliant fault-tolerant bus architecture. We first derive and qualitatively analyze a "stack-tree topology" that not only complies with IEEE 1394 but also enables the implementation of a fault-tolerant bus architecture without node redundancy. We then present a quantitative evaluation that demonstrates significant reliability improvement from the COTS-based fault tolerance.
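As a rough illustration of the property such a topology buys — tolerance of any single node failure without node redundancy — the sketch below models a bus topology as an adjacency list and checks that the surviving nodes stay connected after each possible failure. The chain-with-skip-links shape is an invented stand-in, not the paper's stack-tree construction; in the paper, redundant links would be kept inactive so the active topology remains a 1394-compliant tree.

```python
# Rough illustration (not the paper's construction): check that a bus
# topology keeps all surviving nodes connected after any single node
# failure. The example topology -- a chain where node i also links to
# node i+2 -- is an invented stand-in for the stack-tree arrangement.
from collections import deque

def survives_any_single_failure(adjacency):
    nodes = set(adjacency)
    for failed in nodes:
        remaining = nodes - {failed}
        start = next(iter(remaining))
        seen, queue = {start}, deque([start])
        while queue:  # BFS over the surviving nodes
            for nbr in adjacency[queue.popleft()]:
                if nbr in remaining and nbr not in seen:
                    seen.add(nbr)
                    queue.append(nbr)
        if seen != remaining:
            return False
    return True

n = 6
adjacency = {i: set() for i in range(n)}
for i in range(n):
    for j in (i + 1, i + 2):  # link to next and next-but-one neighbor
        if j < n:
            adjacency[i].add(j)
            adjacency[j].add(i)

print(survives_any_single_failure(adjacency))  # True
```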
{"title":"COTS-based fault tolerance in deep space: Qualitative and quantitative analyses of a bus network architecture","authors":"A. Tai, S. Chau, L. Alkalai","doi":"10.1109/HASE.1999.809480","DOIUrl":"https://doi.org/10.1109/HASE.1999.809480","url":null,"abstract":"Among the COTS applications in the X2000 architecture for deep-space missions, the use of commercial bus standards is the highest-payoff COTS application since a bus interface has a global impact and enabling effect on system cost and capability, respectively. While COTS bus standards enable significant cost reductions, it is a great challenge for us to deliver a highly-reliable long-term survivable system employing COTS standards that are not developed for mission-critical applications. The spirit of our solution to the problem is to exploit the pertinent standard features of a COTS product to circumvent its shortcomings, though these standard features may not be originally designed for highly reliable systems. In this paper we discuss our experiences and findings on the design and assessment of an IEEE 1394 compliant fault-tolerant bus architecture. We first derive and qualitatively analyze a \"stack-tree topology\" that not only complies with IEEE 1394 but also enables the implementation of a fault-tolerant bus architecture without node redundancy. We then present a quantitative evaluation that demonstrates significant reliability improvement from the COTS-based fault tolerance.","PeriodicalId":369187,"journal":{"name":"Proceedings 4th IEEE International Symposium on High-Assurance Systems Engineering","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116548594","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 1999-11-17 | DOI: 10.1109/HASE.1999.809487
Lessons from 342 medical device failures
D. Wallace, D. R. Kuhn
Most complex systems today contain software, and system failures activated by software faults can provide lessons for software development practices and software quality assurance. This paper presents an analysis of software-related failures of medical devices that caused no death or injury but led to recalls by the manufacturers. The analysis categorizes the failures by their symptoms and faults, and discusses methods of preventing and detecting faults in each category. The nature of the faults provides lessons about the value of generally accepted quality practices for prevention and detection methods applied prior to system release. It also provides some insight into the need for formal requirements specification and for improved testing of complex hardware-software systems.
{"title":"Lessons from 342 medical device failures","authors":"D. Wallace, D. R. Kuhn","doi":"10.1109/HASE.1999.809487","DOIUrl":"https://doi.org/10.1109/HASE.1999.809487","url":null,"abstract":"Most complex systems today contain software, and systems failures activated by software faults can provide lessons for software development practices and software quality assurance. This paper presents an analysis of software-related failures of medical devices that caused no death or injury but led to recalls by the manufacturers. The analysis categorizes the failures by their symptoms and faults, and discusses methods of preventing and detecting faults in each category. The nature of the faults provides lessons about the value of generally accepted quality practices for prevention and detection methods applied prior to system release. It also provides some insight into the need for formal requirements specification and for improved testing of complex hardware-software systems.","PeriodicalId":369187,"journal":{"name":"Proceedings 4th IEEE International Symposium on High-Assurance Systems Engineering","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127173195","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 1999-11-17 | DOI: 10.1109/HASE.1999.809481
Predicting fault-prone software modules in embedded systems with classification trees
T. Khoshgoftaar, E. B. Allen
Embedded-computer systems have become essential elements of the modern world. For example, telecommunications systems are the backbone of society's information infrastructure. Embedded systems must have highly reliable software: the consequences of failures may be severe, downtime may not be tolerable, and repairs in remote locations are often expensive. Moreover, today's fast-moving technology marketplace mandates that embedded systems evolve, resulting in multiple software releases embedded in multiple products. Software quality models can be valuable tools for software engineering of embedded systems, because some software-enhancement techniques are so expensive or time-consuming that it is not practical to apply them to all modules. Targeting such enhancement techniques is an effective way to reduce the likelihood of faults being discovered in the field. Research has shown software metrics to be useful predictors of software faults. A software quality model is developed using measurements and fault data from a past release; the calibrated model is then applied to modules currently under development, yielding predictions on a module-by-module basis. This paper examines the Classification And Regression Trees (CART) algorithm for predicting which software modules are at high risk of faults being discovered during operations. CART is attractive because it emphasizes pruning to achieve robust models. This paper presents details of the CART algorithm in the context of software engineering of embedded systems, and illustrates the approach with a case study of four consecutive releases of software embedded in a large telecommunications system. The level of accuracy achieved in the case study would be useful to developers of an embedded system, and the case study indicated that the model would continue to be useful over several releases as the system evolves.
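The paper predates today's common toolkits, but the calibrate-then-apply step can be sketched with scikit-learn's CART-style decision tree, using cost-complexity pruning in the spirit of the robustness the authors emphasize. The module metrics, labels, and pruning parameter below are invented placeholders, not the paper's data.

```python
# Sketch of the modeling step with scikit-learn's CART-style tree (a
# modern stand-in; the paper used the original CART algorithm). Module
# metrics and fault labels below are invented placeholders.
from sklearn.tree import DecisionTreeClassifier

# Calibrate on a past release: per-module metrics -> fault-prone (1) or not (0).
X_past = [
    # [lines of code, cyclomatic complexity, changes since last release]
    [1200, 35, 14],
    [ 300,  4,  1],
    [2500, 60, 22],
    [ 150,  3,  0],
    [ 900, 20,  9],
    [ 200,  5,  2],
]
y_past = [1, 0, 1, 0, 1, 0]

# ccp_alpha enables cost-complexity pruning, echoing CART's emphasis on
# pruning back to a robust tree.
model = DecisionTreeClassifier(ccp_alpha=0.01, random_state=0)
model.fit(X_past, y_past)

# Apply the calibrated model to modules in the current release.
X_current = [[1100, 30, 11], [250, 6, 1]]
print(model.predict(X_current))  # e.g., [1 0]: target enhancement at module 0
```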
{"title":"Predicting fault-prone software modules in embedded systems with classification trees","authors":"T. Khoshgoftaar, E. B. Allen","doi":"10.1109/HASE.1999.809481","DOIUrl":"https://doi.org/10.1109/HASE.1999.809481","url":null,"abstract":"Embedded-computer systems have become essential elements of the modern world. For example, telecommunications systems are the backbone of society's information infrastructure. Embedded systems must have highly reliable software. The consequences of failures may be severe; down-time may not be tolerable; and repairs in remote locations are often expensive. Moreover, today's fast-moving technology marketplace mandates that embedded systems evolve, resulting in multiple software releases embedded in multiple products. Software quality models can be valuable tools for software engineering of embedded systems, because some software-enhancement techniques are so expensive or time-consuming that it is not practical to apply them to all modules. Targeting such enhancement techniques is an effective way to reduce the likelihood of faults discovered in the field. Research has shown software metrics to be useful predictors of software faults. A software quality model is developed using measurements and fault data from a past release. The calibrated model is then applied to modules currently under development. Such models yield predictions on a module-by-module basis. This paper examines the Classification And Regression Trees (CART) algorithm for predicting which software modules have high risk of faults to be discovered during operations. CART is attractive because it emphasizes pruning to achieve robust models. This paper presents details on the CART algorithm in the context of software engineering of embedded systems. We illustrate this approach with a case study of four consecutive releases of software embedded in a large telecommunications system. The level of accuracy achieved in the case study would be useful to developers of an embedded system. The case study indicated that this model would continue to be useful over several releases as the system evolves.","PeriodicalId":369187,"journal":{"name":"Proceedings 4th IEEE International Symposium on High-Assurance Systems Engineering","volume":"53 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122742822","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 1999-11-17 | DOI: 10.1109/HASE.1999.809482
Building high-assurance systems using COTS components: whether, why, when and how?
R. Paul, A. Tai
The implementation of COTS-based high assurance is becoming a major challenge today, as cost concerns have led to increased use of COTS products for critical applications. On the other hand, vendors remain reluctant to incorporate fault tolerance features into COTS products, because doing so is likely to increase development and production costs and thus weaken the market competitiveness of their products. It is therefore crucial to cope with the current state of COTS.

This panel brings together researchers and practitioners with expertise, experience, and insight on using COTS components to build high-assurance systems. The purpose of the panel is to foster debating, exchanging, and integrating opinions, ideas, and solutions from various perspectives (e.g., COTS software versus COTS hardware; COTS for long-life deep-space systems versus COTS for highly available communication applications). We specifically solicit different opinions on the following issues: Can we build high-assurance systems using COTS components? If not, why is it inappropriate or impossible? If so, why is it possible to use COTS components that were not designed for critical applications? When (that is, under which circumstances and conditions) is it appropriate to use COTS components for high-assurance systems? How do we derive solutions to mitigate the problems and inadequacies of COTS products?

Among the particular questions we intend to discuss are:
1. What are the evaluation criteria and tradeoff strategies for COTS product selection for high-assurance systems?
2. Is it viable to influence vendors to provide or enhance high-assurance properties in future versions of their COTS products? What are the strategies?
3. Which will be the most practical and effective basis for developing methodologies that mitigate the effects of design faults and/or inadequacies of COTS software: fault prediction, fault containment, or adaptive fault tolerance?
4. Is it possible and practical to integrate methods for mitigating the effects of the design faults and inadequacies of COTS software and hardware in a high-assurance system? And if so, how?
{"title":"Building high-assurance systems using COTS components: whether, why, when and how?","authors":"R. Paul, A. Tai","doi":"10.1109/HASE.1999.809482","DOIUrl":"https://doi.org/10.1109/HASE.1999.809482","url":null,"abstract":"The implementation of COTS-based high assurance is becoming a major challenge today when cost concern has led to increased use of COTS products for critical applications. On the other hand, vendors remain reluctant to incorporate fault tolerance features into COTS products because doing so is likely to increase development and production costs and thus weaken the market competitiveness of their products. Therefore, it is crucial for us to cope with the current state of COTS.This panel brings together the researchers and practitioners with expertise, experiences and insights on using COTS components to build high-assurance systems. The purpose of the panel is to foster debating, exchanging and integrating opinions, ideas and solutions from various perspective (e.g., COTS software versus COTS hardware, COTS for long-life deep-space systems versus COTS for highly-available communication applications). We specially solicitate different opinions on the following issues: Whether can we build high-assurance systems using COTS components? Why is it inappropriate or impossible to build high-assurance systems using COTS components? (If the answer to the first question is \"No.\") Why is it possible to use COTS components that are not designed for critical applications to build high-assurance systems? (If the answer to the first question is \"Yes.\") When (that is, under which circumstances and conditions) is it appropriate to use COTS components for high-assurance systems? How do we derive solutions to mitigate the problems and inadequacies of COTS products?Among the particular questions we intend to discuss are: 1. What are the evaluation criteria and tradeoff strategies for COTS product selection for high-assurance systems?2. Is it viable to influence the vendors to provide or enhance high-assurance properties for the future versions of their COTS products? What are the strategies?3. Which will be the most practical and effective basis for us to develop methodologies that mitigate the effects of design faults and/or inadequacies of COTS software: fault predication, fault containment, or adaptive fault tolerance4. Is it possible and practical to integrate the methods for mitigating the effects of the design faults/inadequacies of COTS software and hardware in a high-assurance system? And how, if the answer is positive?","PeriodicalId":369187,"journal":{"name":"Proceedings 4th IEEE International Symposium on High-Assurance Systems Engineering","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115230000","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 1999-11-17 | DOI: 10.1109/HASE.1999.809484
HASE in wireless systems
I. Levendel
Two problems have a significant economic impact on high-assurance engineering of systems. The first problem originates from a frequent lack of discipline in the design of dependable systems, often exhibited by weak or non-existent staffing dedicated exclusively to the design and implementation of a cohesive error and failure management strategy. This, in turn, results in excessive field costs for both defect repairs and system maintenance staffing.

The second problem is that traditional dependable system designs are very expensive in terms of cost of goods, because they rely heavily on proprietary hardware and software. In fact, the implementation of dependability may increase system costs by several orders of magnitude. This is why the usage of COTS appears attractive from a simple-minded viewpoint. For instance, the reality of competition in the more open wireless market has done more for component reuse than any other factor. However, the urgent need for lower cost of goods, combined with the aforementioned lack of discipline in design for dependability, has led to lower service quality. Conversely, developing a discipline for dependable system design will be a necessary enabler of the use of COTS.

In spite of some differences, which are explained next, designing dependable systems using COTS requires the same fundamental principles as designing traditional dependable systems. First, errors and malfunctions need to be detected and located. To that effect, reusable components need to be diagnosable: their interfaces need to provide information about the eventual occurrence of errors and malfunctions (component observability). In addition, if the functioning of a failing component cannot be corrected, the component must be able to fail in a way that allows its real-time replacement by another, equivalent component (component controllability). There is also a need to design and implement, in the application software, mechanisms to manage system reconfiguration without notable service interruptions. Although these design principles are fundamental in both settings, COTS designs must emphasize clear component-boundary design constraints for dependability, whereas in traditional designs boundaries are not as critical.

In summary, component observability and controllability, together with well-organized recovery strategies, are necessary complementary requirements for the dependable integration of systems using COTS. Undoubtedly, the need to reduce cost while maintaining system dependability will provide a strong incentive for the establishment of a strong design discipline and for the adaptation of COTS for dependable integration.
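The observability/controllability/recovery principles above can be made concrete with a small, hypothetical sketch: a supervisor polls a component's health through its interface and swaps in an equivalent component on failure. All names are invented; no particular COTS API is implied.

```python
# Generic illustration of the stated principles -- observability (the
# component reports status at its interface), controllability (a failed
# component can be taken out of service), and recovery without notable
# service interruption. All names are invented; this is not a COTS API.
class Component:
    def __init__(self, name):
        self.name = name
        self.failed = False

    def healthy(self):  # observability: interface exposes error status
        return not self.failed

    def serve(self, request):
        if self.failed:
            raise RuntimeError(f"{self.name} is down")
        return f"{self.name} handled {request}"

class Supervisor:
    """Routes requests to the active component; reconfigures on failure."""
    def __init__(self, primary, spare):
        self.active, self.spare = primary, spare

    def serve(self, request):
        if not self.active.healthy() and self.spare is not None:
            # controllability: replace the failed component in real time
            self.active, self.spare = self.spare, None
        return self.active.serve(request)

sup = Supervisor(Component("cots-a"), Component("cots-b"))
print(sup.serve("req-1"))  # cots-a handled req-1
sup.active.failed = True
print(sup.serve("req-2"))  # cots-b handled req-2 (reconfigured)
```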
{"title":"HASE in wireless systems","authors":"I. Levendel","doi":"10.1109/HASE.1999.809484","DOIUrl":"https://doi.org/10.1109/HASE.1999.809484","url":null,"abstract":"Two problems have a significant economic impact on high assurance engineering of systems. The first problem originates from a frequent lack of discipline in the design of dependable systems, which is often exhibited by a weak or non-existent staffing exclusively dedicated to the design and implementation of a cohesive error and failure management strategy. This, in turn, results in excessive field costs for both defect repairs and system maintenance staffing.The second problem is due to the fact that traditional dependable system designs are very expensive in terms of cost of goods, because they rely heavily on proprietary hardware and software. In fact, the implementation of dependability may increase system costs by several orders of magnitude. This is why the usage of COTS appears attractive from a simple-minded viewpoint. For instance, the reality of competition in the more open wireless market has done more for component reuse that any other factor. However, the urgent need for lower cost of goods combined with of the aforementioned first problem (frequent lack of discipline in design for dependability) have led to lower service quality. Conversely, developing a discipline for dependable system design will be a necessary enabler of the use of COTS.In spite of some differences which are explained next, designing dependable systems using COTS requires the same fundamental principles as designing traditional dependable systems. First, errors and malfunctions need to be detected and located. To that effect, reusable components need to be diagnosable, namely their interfaces need to provide information about the eventual occurrence of errors and malfunctions (component observability). In addition, if the functioning of a failing component cannot be corrected, the component must be able to fail in a way that allows its real time replacement by another equivalent component (component controlability). There is also a need to design and implement, in the application software, mechanisms to manage system reconfiguration without notable service interruptions. Although these fundamental design principles are fundamental, COTS designs must emphasize clear component boundary design constraints for dependability, whereas in traditional designs boundaries are not as critical.In summary, component observability and controlability, and well-organized recovery strategies are necessary complementary requirements for the dependable integration of systems using COTS. Undoubtedly, the need to reduce cost while maintaining system dependability will provide a strong incentive for the establishment of a strong design discipline and for the adaptation of COTS for dependable integration.","PeriodicalId":369187,"journal":{"name":"Proceedings 4th IEEE International Symposium on High-Assurance Systems Engineering","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122771709","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 1999-11-17 | DOI: 10.1109/HASE.1999.809496
The Amaranth framework: Probabilistic, utility-based quality of service management for high-assurance computing
C. Hoover, Jeffery P. Hansen, P. Koopman, S. Tamboli
System resource management for high-assurance applications such as the command and control of a battle group is a complex problem. These applications often require guaranteed computing services that must satisfy both hard and soft deadlines. In addition, their resource demands can vary significantly over time with bursts of high activity amidst periods of inactivity. A traditional solution has been to dedicate resources to critical application tasks and to share resources among noncritical tasks. With the increasing complexity of high-assurance applications and the need to reduce system costs, dedicating resources is not a satisfactory solution. The Amaranth Project at Carnegie Mellon is researching and developing a framework for allocating shared resources to support multiple quality of service (QoS) dimensions and to provide probabilistic assurances of service. This paper is an overview of the Amaranth framework, the current results from applying the framework, and the future research directions for the Amaranth project.
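The abstract does not spell out Amaranth's allocation algorithm. As a generic, hedged sketch of utility-based QoS management, the fragment below chooses one service level per task to maximize total expected utility within a shared capacity budget; the tasks, numbers, and exhaustive-search policy are invented, not Amaranth's.

```python
# Hedged sketch of utility-based QoS allocation in general (not Amaranth's
# algorithm): choose one service level per task to maximize total expected
# utility within a shared capacity budget. Tasks and numbers are invented.
from itertools import product

CAPACITY = 10  # shared resource units (e.g., CPU bandwidth)

# task -> list of (cost, expected utility) options, one per QoS level;
# "expected" utility = value of the level weighted by its assurance.
tasks = {
    "radar-track":   [(2, 0.50), (4, 0.90), (6, 0.95)],
    "video-feed":    [(1, 0.20), (3, 0.60), (5, 0.70)],
    "status-logger": [(1, 0.10), (2, 0.15)],
}

names = list(tasks)
best_value, best_choice = -1.0, None
for levels in product(*(tasks[n] for n in names)):  # exhaustive search
    cost = sum(c for c, _ in levels)
    value = sum(u for _, u in levels)
    if cost <= CAPACITY and value > best_value:
        best_value, best_choice = value, dict(zip(names, levels))

print(best_value)   # ~1.70
print(best_choice)  # {'radar-track': (4, 0.9), 'video-feed': (5, 0.7), ...}
```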
{"title":"The Amaranth framework: Probabilistic, utility-based quality of service management for high-assurance computing","authors":"C. Hoover, Jeffery P. Hansen, P. Koopman, S. Tamboli","doi":"10.1109/HASE.1999.809496","DOIUrl":"https://doi.org/10.1109/HASE.1999.809496","url":null,"abstract":"System resource management for high-assurance applications such as the command and control of a battle group is a complex problem. These applications often require guaranteed computing services that must satisfy both hard and soft deadlines. In addition, their resource demands can vary significantly over time with bursts of high activity amidst periods of inactivity. A traditional solution has been to dedicate resources to critical application tasks and to share resources among noncritical tasks. With the increasing complexity of high-assurance applications and the need to reduce system costs, dedicating resources is not a satisfactory solution. The Amaranth Project at Carnegie Mellon is researching and developing a framework for allocating shared resources to support multiple quality of service (QoS) dimensions and to provide probabilistic assurances of service. This paper is an overview of the Amaranth framework, the current results from applying the framework, and the future research directions for the Amaranth project.","PeriodicalId":369187,"journal":{"name":"Proceedings 4th IEEE International Symposium on High-Assurance Systems Engineering","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130121433","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}