Three important factors in dependable computing are cost, error correction, and high availability. In this paper we focus on assessing a proposed model that encapsulates all three factors, together with a virtual architecture that can be implemented in the IaaS layer of cloud computing. The proposed model is assessed against a popular existing architecture, the Triple Modular Redundant (TMR) system, and the availability analysis is done with fault trees combined with Markov chains. These experiments demonstrate that virtualizing the TMR system using the proposed architecture achieves almost the same level of availability/reliability and cost, along with the inherent advantages of virtual systems: faster system restart, more efficient use of resources, and migration.
"An Availability Model of a Virtual TMR System with Applications in Cloud/Cluster Computing," by Ricardo Paharsingh and O. Das. 2011 IEEE 13th International Symposium on High-Assurance Systems Engineering. doi:10.1109/HASE.2011.11
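The availability gain from TMR's 2-of-3 majority voting can be sketched in closed form. The snippet below is a minimal illustration only, not the paper's fault-tree/Markov analysis, and the MTBF/MTTR figures are invented:

```python
# Availability of a triple modular redundant (TMR) system with a perfect
# voter: the system is up while at least 2 of 3 replicas are up.
# MTBF/MTTR values are hypothetical; the paper's fault-tree/Markov model
# captures far more detail than this closed form.

def steady_state_availability(mtbf_hours, mttr_hours):
    """A = MTBF / (MTBF + MTTR) for a single replica."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

def tmr_availability(a):
    """Majority vote over three independent replicas: 3A^2 - 2A^3."""
    return 3 * a**2 - 2 * a**3

a = steady_state_availability(mtbf_hours=1000.0, mttr_hours=2.0)
print(f"single replica: {a:.6f}")
print(f"TMR (2-of-3):   {tmr_availability(a):.6f}")
```

For any replica availability above 0.5, the 2-of-3 vote strictly improves on a single replica, which is why the virtualized variant can afford slightly weaker individual nodes.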
High-assurance computer systems must fulfill security, safety, fault-tolerance, and real-time properties. Analysis of these properties is typically performed in isolation. An integrated analysis of all the properties is a challenge that can be addressed by expressing them in a common framework. The Unified Modeling Language (UML) is a standard modeling language with such a capability. In this paper we focus on using UML to analyze the safety properties of high-assurance systems. In particular, we are interested in the study of software fault propagation and its functional-level effects. In previous work we developed the Failure Propagation and Simulation Approach to study whether a particular fault will propagate through the design and cause system-level functional failures. Mapping between different UML diagrams is the central concept behind the approach. This paper briefly introduces the Failure Propagation and Simulation Approach and presents in detail the executable models developed to automate the simulation process. These executable models are built using the notation of the Event Sequence Diagram, one of the established reliability and safety analysis techniques for sequence progression.
"An Early Design Stage UML-Based Safety Analysis Approach for High Assurance Software Systems," by Chetan Mutha and C. Smidts. 2011 IEEE 13th International Symposium on High-Assurance Systems Engineering. doi:10.1109/HASE.2011.37
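The core question the approach asks, whether a fault injected at one component can reach a system-level function, reduces to reachability over the design's dependency structure. A hedged sketch, with a purely illustrative component graph (the real approach maps between UML diagrams, not a bare graph):

```python
# Does a fault injected at one component propagate along design
# dependencies to a system-level function? Here the "design" is just a
# directed graph with invented component names; breadth-first search
# answers the reachability question.
from collections import deque

design = {  # edges: component -> components it feeds
    "sensor": ["filter"],
    "filter": ["controller"],
    "controller": ["actuator", "logger"],
    "logger": [],
    "actuator": [],
}

def propagates(design, fault_at, system_function):
    """Breadth-first search: can the fault reach the given function?"""
    seen, frontier = {fault_at}, deque([fault_at])
    while frontier:
        node = frontier.popleft()
        if node == system_function:
            return True
        for nxt in design.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return False

print(propagates(design, "sensor", "actuator"))  # fault reaches the output
print(propagates(design, "logger", "actuator"))  # no path: fault is contained
```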
Day by day, managers charged with the development of complex embedded systems struggle with the evolving quality and productivity of software. Measurement and reporting of key software project metrics help these managers visualize software development performance, but the data and subsequent analyses needed to make decisions are often limited at best. Further, the data from multiple software projects across the organization necessary to derive, plan, and implement longer-term strategic and tactical plans are difficult to aggregate, organize, and report. This paper provides a way, using project metrics and data envelopment analysis, for a software organization to perform a comparative analysis of software projects, identify the strengths and weaknesses of each under a specific software production efficiency model, and identify best practices that should be brought forward within the organization for further study and application on future projects. Using this technique, a company developing product software can reliably audit and systematically adjust its business processes to continually improve and keep competitive its 'business of software.'
"Benchmarking Embedded Software Development Project Performance," by Michael F. Siok and J. Tian. 2011 IEEE 13th International Symposium on High-Assurance Systems Engineering. doi:10.1109/HASE.2011.59
Turkish Airlines Flight 1951 crashed short of its destination runway at Schiphol Airport, Amsterdam, Netherlands on February 25, 2009. Nine people lost their lives, 177 were injured, and the aircraft was a complete loss. An equipment failure in the left radio altimeter caused the auto-throttle system to go into retard flare mode, in anticipation of an immediate landing, while the aircraft was still near 2000 ft above terrain. There were indications and warnings of this condition to the crew, but they were ignored. The throttle retardation was also temporarily masked by the aircraft being directed to intercept the localizer from above, a highly unusual procedure. The investigation found numerous instances of low altitude readings on the accident aircraft as well as on others. The accident aircraft had also experienced two instances of throttle retardation on recent flights. Poor reporting practices led the manufacturer and the certifying authorities to underestimate the prevalence of this failure pattern. It is concluded that in many instances actions and design decisions were based on the assumption that further conditions would be within the normal envelope. This is a dangerous assumption that must be avoided if we want to maintain the fine safety record of commercial aviation.
"So Much to Learn from One Accident: Crash of 737 on 25 February 2009," by H. Hecht. 2011 IEEE 13th International Symposium on High-Assurance Systems Engineering. doi:10.1109/HASE.2011.45
Yingying Zhang, Emmanuel Rodriguez, Hao Zheng, C. Myers
Partial order reduction is essential for addressing state explosion when verifying concurrent systems, as it prunes states irrelevant to the verification results. However, traditional static approaches that analyze system model structures often do not work well. To address this problem, this paper presents a new behavioral analysis approach in which a compositional reachability analysis method generates over-approximate state spaces for all modules in a system, and the independent transitions necessary for partial order reduction are then computed by examining these state spaces. Compared to static analysis approaches, the independent transitions computed are more refined and accurate. Experimental results on several examples show that the presented approach is promising.
"A Behavioral Analysis Approach for Efficient Partial Order Reduction," by Yingying Zhang, Emmanuel Rodriguez, Hao Zheng, and C. Myers. 2011 IEEE 13th International Symposium on High-Assurance Systems Engineering. doi:10.1109/HASE.2011.15
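The independence check at the heart of partial order reduction can be sketched concretely. Below, transitions are modeled as partial functions on states (returning None when disabled), and two transitions count as independent if, wherever both are enabled, they commute and neither disables the other. This brute-forces a tiny explicit state set, whereas the paper examines over-approximate per-module state spaces; all names are illustrative:

```python
# Hedged sketch of a transition-independence check for partial order
# reduction. A transition is a partial function on states: it returns
# the successor state, or None when disabled.

def independent(t1, t2, states):
    for s in states:
        s1, s2 = t1(s), t2(s)
        if s1 is not None and s2 is not None:
            # both enabled here: each must remain enabled after the other
            # fires, and both execution orders must reach the same state
            if t2(s1) is None or t1(s2) is None or t2(s1) != t1(s2):
                return False
    return True

# states are (x, y) counters; incx and incy touch disjoint variables
states = [(x, y) for x in range(3) for y in range(3)]
incx = lambda s: (s[0] + 1, s[1]) if s[0] < 2 else None
incy = lambda s: (s[0], s[1] + 1) if s[1] < 2 else None
# resetx conflicts with incx: different orders give different states
resetx = lambda s: (0, s[1])

print(independent(incx, incy, states))    # disjoint variables: True
print(independent(incx, resetx, states))  # conflicting writes: False
```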
This paper describes a Smart Vibration Monitoring System (SVMS) developed as an effective way to reduce equipment losses and enhance the safety, efficiency, reliability, availability, and service life of an ocean turbine. The system utilizes advanced signal processing and analysis techniques to evaluate the health of a machine, identify incipient anomalies (faults), and evaluate their severity relative to the machine's condition. The existing system and planned improvements are described and discussed. The primary function of the SVMS is automatic machinery fault detection and diagnosis based on real-time processing and analysis of vibration data; it essentially performs the same functions as a vibration analyst would when post-processing off-line data. The SVMS automatically sends a warning message to a cell phone and to an email address as soon as it detects a fault developing within the machine; the message contains a generic identification of the fault.
"Smart Vibration Monitoring System for an Ocean Turbine," by Mustapha Mjit, P. Beaujean, and D. Vendittis. 2011 IEEE 13th International Symposium on High-Assurance Systems Engineering. doi:10.1109/HASE.2011.34
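The monitoring core can be sketched at its simplest: compute a broadband vibration level over a window and classify it against alarm thresholds, roughly the way an analyst would read an overall RMS level. The actual SVMS performs far richer spectral analysis; thresholds and signals below are invented:

```python
# Minimal condition-monitoring sketch: RMS vibration level over a window,
# checked against warning/alarm thresholds. An "ALARM" result is what
# would trigger the system's SMS/email notification.
import math

def rms(samples):
    return math.sqrt(sum(x * x for x in samples) / len(samples))

def classify(level, warn=1.0, alarm=2.0):
    if level >= alarm:
        return "ALARM"
    if level >= warn:
        return "WARNING"
    return "OK"

# 1 s of a 50 Hz tone sampled at 1 kHz; amplitude jump simulates a fault
healthy = [0.2 * math.sin(2 * math.pi * 50 * t / 1000) for t in range(1000)]
faulty  = [3.0 * math.sin(2 * math.pi * 50 * t / 1000) for t in range(1000)]

print(classify(rms(healthy)))  # low broadband level
print(classify(rms(faulty)))   # amplitude jump pushes RMS past the alarm
```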
In recent years there has been great interest in implementing object recognition frameworks on mobile phones, stemming from the fact that advances in object recognition algorithms and mobile phone capabilities have built a congenial ecosystem. Application developers on mobile platforms are trying to utilize object recognition technology to build better human-computer interfaces. This approach is in its nascent phase, and a proper application framework is required. In this paper, we propose a framework to overcome the design challenges and provide an evaluation methodology to assess system performance. We use the emerging Android mobile platform to implement and test the framework, perform a case study using the proposal, and report the test results. This assessment will help developers make wise decisions about their application design. Furthermore, the Android API developers could use this information to provide better interfaces to third-party developers. The design and evaluation methodology could be extended to other mobile platforms for a wider consumer base.
"Validation of Object Recognition Framework on Android Mobile Platform," by V. Tyagi, A. Pandya, Ankur Agarwal, and B. Alhalabi. 2011 IEEE 13th International Symposium on High-Assurance Systems Engineering. doi:10.1109/HASE.2011.62
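The kind of evaluation harness such a methodology implies can be sketched in a few lines: run a recognizer over labeled test images and report accuracy plus per-frame latency. The recognizer below is a deliberately trivial stub, and all data is invented; on Android the same loop would wrap the real recognition pipeline:

```python
# Hedged sketch of an object-recognition evaluation harness: accuracy
# and mean latency over a labeled dataset. The "recognizer" is a stub
# that labels an image by its mean pixel value, purely for illustration.
import time

def evaluate(recognize, labeled_images):
    correct, latencies = 0, []
    for image, truth in labeled_images:
        start = time.perf_counter()
        prediction = recognize(image)
        latencies.append(time.perf_counter() - start)
        correct += (prediction == truth)
    n = len(labeled_images)
    return {"accuracy": correct / n,
            "mean_latency_s": sum(latencies) / n}

recognize = lambda image: "bright" if sum(image) / len(image) > 0.5 else "dark"
dataset = [([0.9, 0.8, 0.7], "bright"), ([0.1, 0.2, 0.1], "dark"),
           ([0.6, 0.9, 0.8], "bright"), ([0.4, 0.3, 0.2], "bright")]

report = evaluate(recognize, dataset)
print(report["accuracy"])  # the stub gets 3 of 4 right
```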
With more and more personal data being collected and stored by service providers, there is an increasing need to ensure that its usage is compliant with privacy regulations. We consider the specific scenario where policies are defined in metric temporal logic and audited against database usage logs. Previous work has shown that this can indeed be achieved efficiently for a very expressive set of policies. One of the main ingredients of such an auditing process is the availability of sufficient database logs. Currently, determining the logs needed and then producing the auditing specifications to generate them is a manual process. This is not only time-consuming but error-prone as well, leading to either insufficient or redundant logging. Logging in general is costly, as it imposes an overhead on real-time database performance, so redundant logging is not an alternative either. Our contribution in this work is to streamline the log generation process by deriving the auditing specifications directly from the policies to be audited. We also show how the required logging can be minimized based on the temporal constraints specified in the policies. Given privacy policies as input, the output of the proposed tool is the corresponding auditing specifications, which can be installed directly in the databases to produce logs that are both minimal and sufficient to audit the given policies. The tool has been implemented and tested in a real-life scenario.
"Transforming Privacy Policies to Auditing Specifications," by Debmalya Biswas and Valtteri Niemi. 2011 IEEE 13th International Symposium on High-Assurance Systems Engineering. doi:10.1109/HASE.2011.51
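The minimization idea can be sketched for one special case: if every policy that refers to a table only looks back over a bounded window (the metric bound of its temporal operators), the log for that table never needs to retain more than the largest such window, and unreferenced tables need no logging at all. The policy contents below are invented, and the paper derives full auditing specifications rather than just retention windows:

```python
# Hedged sketch: derive minimal per-table log retention from the
# temporal bounds of the policies that reference each table.

policies = [
    {"name": "purpose_limitation",  "tables": ["access_log"],            "window_days": 30},
    {"name": "deletion_on_request", "tables": ["access_log", "updates"], "window_days": 90},
    {"name": "marketing_consent",   "tables": ["consent"],               "window_days": 365},
]

def minimal_retention(policies):
    """Map each referenced table to the largest lookback window needed."""
    retention = {}
    for p in policies:
        for table in p["tables"]:
            retention[table] = max(retention.get(table, 0), p["window_days"])
    return retention

print(minimal_retention(policies))
# tables never referenced by any policy do not appear: no logging needed
```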
Web services have become more and more important in recent years, and BPEL4WS (BPEL) is a de facto standard for web service composition and orchestration. It contains several distinctive features, including a scope-based compensation and fault handling mechanism. We have already explored the operational semantics and denotational semantics for BPEL, from each of which a set of algebraic laws can be derived. Meanwhile, we have also explored the link between the operational semantics and the algebraic semantics for BPEL; our approach was to derive the operational semantics from the algebraic semantics. This paper considers an animation approach for this link, using the logic programming language Prolog to support the development. First, we animate the operational semantics for BPEL; our approach for deriving operational semantics from algebraic semantics proceeds through head normal form. Second, we animate the algebraic laws for BPEL and, based on this, the generation of the head normal form of each program; four typical forms are introduced for defining head normal form. Third, we explore the animation of deriving operational semantics from head normal form. Test results from the first and third explorations show the soundness and completeness of the operational semantics with respect to the algebraic semantics for BPEL.
"Animating the Approach of Deriving Operational Semantics from Algebraic Semantics for Web Services," by Qian Wang and Huibiao Zhu. 2011 IEEE 13th International Symposium on High-Assurance Systems Engineering. doi:10.1109/HASE.2011.56
Correlated component failures (CCF) degrade system reliability, and hence these failures must be explicitly incorporated into the reliability analysis process. Several contemporary efforts consider CCF; however, most of these approaches introduce an exponential number of parameters and are computationally intensive because they require a complete characterization of the joint distribution of the components. As a result, these approaches are not scalable and cannot be applied to large systems. This paper presents an efficient approach to analyzing system reliability considering CCF. The approach introduces only a quadratic number of parameters and is computationally efficient. Its effectiveness is illustrated through a series of examples. The results indicate that the approach is both simple and efficient and can be applied to large systems.
"Efficient System Reliability with Correlated Component Failures," by L. Fiondella, S. Rajasekaran, and S. Gokhale. 2011 IEEE 13th International Symposium on High-Assurance Systems Engineering. doi:10.1109/HASE.2011.31
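Why pairwise (quadratic) correlation data can suffice is easiest to see in the two-component case: with failure indicators having failure probabilities q1, q2 and correlation rho, the joint failure probability is q1*q2 + rho*sigma1*sigma2, so a two-component parallel system needs no higher-order joint distribution at all. The numbers below are illustrative; the paper generalizes this idea to large systems:

```python
# Hedged two-component sketch: joint failure probability from marginals
# plus a pairwise correlation coefficient, and the resulting reliability
# of a parallel (1-out-of-2) system, which fails only when both fail.
import math

def joint_failure(q1, q2, rho):
    cov = rho * math.sqrt(q1 * (1 - q1) * q2 * (1 - q2))
    return q1 * q2 + cov

def parallel_reliability(q1, q2, rho):
    return 1 - joint_failure(q1, q2, rho)

q1 = q2 = 0.1
print(parallel_reliability(q1, q2, rho=0.0))  # independent case, ~0.99
print(parallel_reliability(q1, q2, rho=0.5))  # positive correlation hurts
```

Positive correlation erodes exactly the redundancy benefit that an independence assumption would predict, which is why ignoring CCF overstates reliability.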