Pub Date: 1999-11-17 | DOI: 10.1109/HASE.1999.809485
L. Sha
The vast majority of COTS software components are not developed for high-reliability applications. Using them directly in embedded systems with high reliability requirements can be hazardous, as shown by the incidents aboard the Navy ships Yorktown, Vicksburg, and USS Hue City.

Challenges to the fault avoidance approach: Ensuring the reliability of COTS software at the user's site is not an easy task, because COTS components are not subject to the customer's high-assurance development process. A customer can buy the source code, subject it to a high-assurance process, and make any modifications that are needed; however, this is a high-cost solution. Furthermore, once a COTS component has been modified, it is unlikely to be compatible with the vendor's future releases, so most of the benefits of using COTS are lost. Making proprietary modifications to COTS components is therefore inconsistent with the original motivation for their use.

Challenges to the fault tolerance approach: There are basically two fault tolerance approaches: fault masking and forward recovery. Fault masking tries to prevent incorrect outputs from being used; for example, the recovery block scheme attempts to check whether an output is correct before using it. Unfortunately, it is often difficult to determine the correctness of a computation without knowing what the correct answer is. Forward fault recovery attempts to recover after incorrect outputs have been used; it is not suitable for all applications, nor is there a general, domain-independent approach to it.
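The recovery block scheme the abstract refers to can be sketched as follows. This is a minimal illustration of the pattern (checkpoint, run a variant, apply an acceptance test, roll back and try an alternate on failure), not the paper's implementation; the variants and acceptance test here are hypothetical:

```python
def recovery_block(primary, alternates, acceptance_test, state):
    """Run the primary variant; if its output fails the acceptance
    test, restore the checkpointed state and try each alternate in turn."""
    checkpoint = dict(state)              # checkpoint before any variant runs
    for variant in [primary, *alternates]:
        result = variant(dict(checkpoint))  # each variant sees a fresh copy
        if acceptance_test(result):
            return result
        # acceptance test failed: discard the result and try the next variant
    raise RuntimeError("all variants failed the acceptance test")

# Hypothetical use: a square root whose primary variant is faulty.
faulty_sqrt = lambda s: -3.0                        # buggy primary (wrong sign)
backup_sqrt = lambda s: s["x"] ** 0.5               # independent alternate
ok = lambda r: r >= 0 and abs(r * r - 9.0) < 1e-9   # acceptance test
print(recovery_block(faulty_sqrt, [backup_sqrt], ok, {"x": 9.0}))  # 3.0
```

The abstract's caveat shows up directly in the sketch: writing `ok` without already knowing the answer is the hard part.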
Title: "Using COTS software in high assurance control applications"
Published in: Proceedings 4th IEEE International Symposium on High-Assurance Systems Engineering
Pub Date: 1999-11-17 | DOI: 10.1109/HASE.1999.809493
Haihong Zheng, S. Bhattacharya
This paper addresses the channel resource management problem for cellular wireless networks under various traffic load conditions. Channel borrowing, a critical function in the effective management of cellular networks, is known to offer key performance benefits in hot-spot situations. We propose a new channel borrowing approach, termed LCRB (Look-ahead Channel Reservation and Borrowing), whose key ideas are the ability to reserve a set of channels for different neighboring cells according to their traffic profiles, and to conduct look-ahead channel borrowing so that channels are distributed in anticipation of a forthcoming hot spot. The main benefit is the timeliness (i.e., reduced delay) of channel borrowing, which helps meet the deadlines of hard real-time messages and reduces the waiting time of incoming calls. Simulations were conducted to demonstrate these benefits.
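The per-neighbor reservation idea can be pictured with a toy policy that splits a reservation budget in proportion to each neighboring cell's forecast load. The proportional largest-remainder split and all names here are our illustration, not the LCRB algorithm itself:

```python
def reserve_for_neighbors(free_channels, traffic_profile, reserve_total):
    """Split a reservation budget among neighboring cells in proportion
    to their forecast traffic, using a largest-remainder split."""
    total = sum(traffic_profile.values())
    shares = {cell: reserve_total * load / total
              for cell, load in traffic_profile.items()}
    reserved = {cell: int(s) for cell, s in shares.items()}
    # hand leftover channels to the cells with the largest remainders
    leftover = reserve_total - sum(reserved.values())
    for cell in sorted(shares, key=lambda c: shares[c] - reserved[c],
                       reverse=True)[:leftover]:
        reserved[cell] += 1
    assert sum(reserved.values()) <= len(free_channels)
    return reserved

profile = {"cell_A": 30, "cell_B": 50, "cell_C": 20}   # forecast load per neighbor
print(reserve_for_neighbors(list(range(20)), profile, reserve_total=10))
```

The look-ahead aspect would then amount to recomputing `traffic_profile` from a forecast before the hot spot arrives, rather than from current demand.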
Title: "Look-ahead channel reservation and borrowing in cellular network systems"
Pub Date: 1999-11-17 | DOI: 10.1109/HASE.1999.809489
S. Goddard, K. Jeffay
The state of the art in verifying the real-time requirements of applications developed using general processing graph models relies on simulation or off-line scheduling. We extend the state of the art by presenting analytical methods that support the analysis of cyclic processing graphs executed with on-line schedulers. We show that it is possible to compute the latency inherent in a processing graph independent of the hardware hosting the application. We also show how to compute the real-time execution rate of each node in the graph. Using the execution rate of each node and the time it takes per execution on a given processor, the resulting CPU utilization can be computed as shown here for the Directed Low Frequency Analysis and Recording (DIFAR) acoustic signal processing application from the Airborne Low Frequency Sonar (ALFS) system of the SH-60B LAMPS MK III anti-submarine helicopter.
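The utilization computation described above amounts to summing, over all graph nodes, the product of each node's real-time execution rate and its per-execution cost on the target processor. A small sketch of that arithmetic (the node figures are invented, not taken from the DIFAR/ALFS application):

```python
def cpu_utilization(nodes):
    """U = sum over nodes of rate_i (executions/sec) * cost_i (sec/execution)."""
    return sum(rate * cost for rate, cost in nodes)

# hypothetical graph nodes: (execution rate in Hz, cost per execution in s)
nodes = [(50.0, 0.004),   # e.g. a filter node
         (25.0, 0.010),   # e.g. an FFT node
         (5.0,  0.020)]   # e.g. a detection/recording node
u = cpu_utilization(nodes)
print(f"U = {u:.2f}")     # 0.55 -> the graph needs 55% of this CPU
```

The point of the paper's analysis is that the rates are derivable from the graph alone, so only the per-execution costs depend on the hosting hardware.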
Title: "Analyzing the real-time properties of a U.S. Navy signal processing system"
Pub Date: 1999-11-17 | DOI: 10.1109/HASE.1999.809492
I. Kaji, Yongdong Tan, K. Mori
Many companies have been forced to respond quickly to changing user needs. To provide more convenient services to consumers, cooperation within a company or across companies is required, and as a result heterogeneous information systems must be integrated. By exploiting the heterogeneity among systems and by cooperating between them, it becomes possible to draw new value from them without violating each system's characteristics. In heterogeneous systems where, for example, data generation timings or frequencies differ, data synchronization should be done by each system without relying on a single coordinator, because system configurations are always changing and the membership of the application group that uses a synchronous data combination also changes. In this paper we propose an autonomous data synchronization technique for heterogeneous systems. An application object that requires a synchronous combination of the data in each system can locally judge which combination of data is most adequate by comparing its synchronization information with that of the others. Thus all systems can use a consistent combination as the synchronous data simply by collecting data locally and exchanging synchronization information. In addition, the cost of synchronization in each system, measured by synchronization waiting time (SWT) and backward data utilization (BDU), becomes even. Because the proposed method does not rely on a single coordinator, it works correctly even if the system configuration changes dynamically or the membership of the synchronizing applications (SyncApps) changes. Simulation results show the fairness of the cost when three systems are interconnected in a straight line.
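One way to picture the local judgment step: each system tags its data with a generation time, and an application object picks the cross-system combination whose timestamps lie closest together, using only locally collected data. The spread-minimizing criterion and all names here are our illustration of the idea, not the paper's algorithm:

```python
from itertools import product

def best_combination(streams):
    """streams: one list of (timestamp, value) pairs per system.
    Return the cross-system combination with the smallest timestamp spread."""
    def spread(combo):
        ts = [t for t, _ in combo]
        return max(ts) - min(ts)
    return min(product(*streams), key=spread)

# hypothetical data from three systems with different generation timings
a = [(10.0, "a1"), (12.0, "a2")]
b = [(11.9, "b1"), (15.0, "b2")]
c = [(12.1, "c1")]
print(best_combination([a, b, c]))  # ((12.0, 'a2'), (11.9, 'b1'), (12.1, 'c1'))
```

Because every system can evaluate the same criterion over the same exchanged synchronization information, they all converge on the same combination without a coordinator.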
Title: "Autonomous data synchronization in heterogeneous systems to assure the transaction"
Pub Date: 1999-11-17 | DOI: 10.1109/HASE.1999.809490
E. Tronci
We present a case study on automatic synthesis of control software from formal specifications for an industrial automation control system. Our aim is to compare the effectiveness (i.e. design effort and controller quality) of automatic controller synthesis from closed loop formal specifications with that of manual controller design followed by automatic verification. The system to be controlled (plant) models a metal processing facility near Karlsruhe. We succeeded in automatically generating C code implementing a (correct by construction) embedded controller for such a plant from closed loop formal specifications. Our experimental results show that for industrial automation control systems automatic synthesis is a viable and profitable (especially as far as design effort is concerned) alternative to manual design followed by automatic verification.
Title: "Formally modeling a metal processing plant and its closed loop specifications"
Pub Date: 1999-11-17 | DOI: 10.1109/HASE.1999.809479
L. Gong, Satya Dodda
With the release of Java 2 SE (Standard Edition, also commonly known as JDK 1.2), Java technology has matured such that it is increasingly deployed as part of the information infrastructure in today's economy and for mission-critical applications. These applications require a high degree of assurance of the underlying technologies, including JDK 1.2. This paper outlines the JDK 1.2 software development process and the special efforts to increase the quality assurance of the security features.
Title: "Security assurance efforts in engineering Java 2 SE (JDK 1.2)"
Pub Date: 1999-11-17 | DOI: 10.1109/HASE.1999.809503
J. M. Covan
Many high consequence systems today are being controlled, or are under consideration for control, by software. Such systems are called high consequence because their failure could result in large numbers of fatalities or injuries, great environmental despoilment, or complete loss of mission or business purpose.
Title: "Why modern systems should minimize the use of safety critical software*"
Pub Date: 1999-11-17 | DOI: 10.1109/HASE.1999.809504
L. Dalton
The continuum of systems, ordered by the consequences of failure, ranges from the completely benign, as in video games, to the extreme, "in the limit" case of nuclear weapons. As a function of those consequences, we must apply engineering skill during all phases of system creation to 1) provide for intrinsic surety and 2) allow for systems that yield sufficiently to intellectually based analysis.
Title: "An \"in the limit\" view*"
Pub Date: 1999-11-17 | DOI: 10.1109/HASE.1999.809483
S. Chau
In recent years, NASA has adopted a faster, better, cheaper philosophy for space exploration. This philosophy mandates that space missions be accomplished with much lower cost, shorter development cycles, and more capabilities than ever. To meet these challenges, starting in 1998 NASA's Office of Space Science initiated the Advanced Deep Space Systems Technology Program, also known as X2000, to develop advanced technologies for future deep-space exploration missions. One of the focus technology development areas is advanced avionics, which is being developed by the Center for Integrated Space Microsystems (CISM) at the Jet Propulsion Laboratory. Under X2000 and CISM, a breakthrough multi-mission avionics system is being developed. This avionics system employs low-cost hardware and software products that are widely available in the commercial market. By using COTS throughout the system, we expect to significantly reduce both the development cost and the recurring cost of the system, and thus to meet the faster, better, cheaper challenges. On the other hand, COTS products are not specifically developed for applications such as deep-space missions. Therefore, the real challenges are how to select COTS technologies and how to overcome their shortcomings in space applications.
Title: "Experience of using COTS components for deep space missions"
Pub Date: 1999-11-17 | DOI: 10.1109/HASE.1999.809498
Diego Del Gobbo, B. Cukic, M. Napolitano, S. Easterbrook
When high assurance applications are concerned, life cycle process control has witnessed steady improvement over the past two decades. As a consequence, the number of software defects introduced in the later phases of the life cycle, such as detailed design and coding, is decreasing. The majority of the remaining defects originate in the early phases of the life cycle. This is understandable, since the early phases deal with the translation from informal requirements into a formalism that will be used by developers. Since the step from informal to formal notation is inevitable, verification and validation of the requirements continue to be the research focus. Discovering potential problems as early as possible provides the potential for significant reduction in development time and cost. In this paper, the focus is on a specific aspect of requirements validation for dynamic fault tolerant control systems: the feasibility assessment of the fault detection task. An analytical formulation of the fault detectability condition is presented. This formulation is applicable to any system whose dynamics can be approximated by a linear model. The fault detectability condition can be used for objective validation of fault detection requirements. In a case study, we analyze an inverted pendulum system and demonstrate that "reasonable" requirements for a fault detection system can be infeasible when validated against the fault detectability condition.
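For a linear model, one coarse necessary condition in this spirit is that a fault direction must actually excite the measured outputs: the observability map applied to the fault-entry directions must not vanish. The following rank check is our simplified illustration of that style of condition, not the detectability condition derived in the paper; the double-integrator plant and fault direction are invented:

```python
import numpy as np

def fault_visible(A, C, F):
    """For x' = Ax (+ fault entering along the columns of F), check that
    every fault direction excites the output: the observability matrix
    applied to F must have full column rank."""
    n = A.shape[0]
    obs = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])
    return bool(np.linalg.matrix_rank(obs @ F) == F.shape[1])

# double integrator with only position measured
A = np.array([[0.0, 1.0], [0.0, 0.0]])
C = np.array([[1.0, 0.0]])
f_velocity = np.array([[0.0], [1.0]])   # fault entering the velocity state
print(fault_visible(A, C, f_velocity))  # True: it propagates to the output
```

A check of this kind captures the paper's larger point: it is evaluated on the model alone, so a "reasonable" fault detection requirement can be shown infeasible before any detector is designed.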
Title: "Fault detectability analysis for requirements validation of fault tolerant systems"