Increasing the effectiveness of FRACAS
M. Ciemian
Pub Date: 2008-01-28 | DOI: 10.1109/RAMS.2008.4925770
Failure Reporting, Analysis, and Corrective Action Systems (FRACAS) are employed in many different product areas in both the commercial and military marketplaces. FRACAS has been surveyed as the most important reliability task that can be performed on a program. Typically, a FRACAS consists of a database that captures and documents field failures, depot returns, failure analyses/investigations, and corrective actions. Trends and data from a FRACAS are used to drive reliability and performance improvements on a program; these improvements are the overarching goal of FRACAS. This paper addresses a FRACAS in general and as employed by AAI Corporation, the contractor on the U.S. Army Shadow® tactical unmanned aircraft system (Shadow® TUAS). Emphasis is placed on data collection, evaluation, and analysis on a medium-volume program or product. The paper does not address the merits of the various commercial software packages, but rather shows a philosophy and strategy that can be used to create an effective FRACAS (which will consist of more than just a failure and corrective action database) in a real-world, non-ideal data environment.
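At its core, the database the abstract describes is a structured failure record plus trend queries over it. A minimal sketch, assuming a hypothetical record schema and a `pareto_of_modes` helper (neither is AAI's actual system):

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class FailureReport:
    """One record in a hypothetical FRACAS database (illustrative schema)."""
    report_id: int
    item: str             # failed assembly or part
    failure_mode: str     # observed mode, e.g. "jammed"
    root_cause: str = ""  # filled in after failure analysis
    corrective_action: str = ""
    closed: bool = False

def pareto_of_modes(reports):
    """Rank failure modes by frequency -- the kind of trend a FRACAS
    uses to aim corrective action at the largest contributors."""
    return Counter(r.failure_mode for r in reports).most_common()

reports = [
    FailureReport(1, "servo", "jammed"),
    FailureReport(2, "servo", "jammed"),
    FailureReport(3, "radio", "no output"),
]
print(pareto_of_modes(reports))  # [('jammed', 2), ('no output', 1)]
```

The point of the paper is that an effective FRACAS is more than this table; the trending shown here is only the starting point for the analysis and corrective-action loop.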
Reliability improvement of deformation tools with the Taguchi robust design
C. Băban, M. Baban, I. Radu
Pub Date: 2008-01-28 | DOI: 10.1109/RAMS.2008.4925798
As markets become increasingly global, one of the most important goals of a manufacturer is to improve the reliability of its products. While reliability may be affected by many potential factors, some factors are more significant than others and have to be identified. It is also important to recommend the values of the significant factors that improve reliability. Taguchi's robust design experiments provide an efficient way to achieve these goals, and the concepts of Taguchi's method in the context of reliability improvement are emphasized at the beginning of the paper.
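In a Taguchi experiment, factor settings are compared through a signal-to-noise ratio; for a reliability response such as tool life, the standard larger-the-better form applies. A minimal sketch with hypothetical tool-life data (the measurements and factor settings are invented for illustration):

```python
import math

def sn_larger_the_better(values):
    """Taguchi S/N ratio for a larger-the-better response (e.g. tool life):
    S/N = -10 * log10(mean(1 / y^2)). A higher S/N setting is preferred."""
    n = len(values)
    return -10.0 * math.log10(sum(1.0 / y ** 2 for y in values) / n)

# Hypothetical tool-life measurements (cycles) at two factor settings:
setting_a = [1200, 1350, 1100]
setting_b = [1500, 1480, 1520]
print(sn_larger_the_better(setting_a) < sn_larger_the_better(setting_b))  # True
```

Ranking the S/N ratio across an orthogonal array of such settings is how the significant factors and their recommended values are identified.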
Examination of the interplay of reliability and security using System Modeling Language
B. Haan
Pub Date: 2008-01-28 | DOI: 10.1109/RAMS.2008.4925842
The Systems Modeling Language (OMG SysML™) is a graphical modeling language developed to describe complex systems. It provides semantics and notations to describe complex systems independent of engineering tools and methodologies. The study summarized in this paper applied SysML semantics and notation to provide a common reference for examining the interplay of reliability and security in complex systems. The paper briefly outlines elements of the articulation of reliability and security in SysML and presents general findings from the study of their interplay in the context of a hypothetical communication system. The review begins with a demonstration of defining desired system functionality using the SysML use case diagram. Additional use case diagrams are then created to model a malicious agent's desire to either disrupt or gain illegal access to a system. Because the use case can be used to define both legitimate and illegitimate functional applications of the system, reliability and security are identified as coherent concepts. Being coherent, the correlation of reliability and security depends on their contextual separation. Contextual separation is built through associations from the use case diagram through other SysML constructs. These associations point to the operating environment and operational periods linked to a particular use case and provide context for element-level reliability modeling. Functional expectations, operating conditions, and operational periods are linked to parametric diagrams that model individual facets of reliability and security. This embeds reliability and security contextually and directly into the system model. The interplay between reliability and security occurs when associations to their embedded facets cross paths in the system model. The interaction is found to depend on the form of attack selected by a malicious agent. Systems that are highly reliable in the functional sense are typically secure against attacks aimed at simply halting functionality. In contrast, the security of that same system against forms of attack that exploit some system characteristic will depend on the attacker's knowledge of and access to the system.
Reliability predictions — more than the sum of the parts
J. McLinn
Pub Date: 2008-01-28 | DOI: 10.1109/RAMS.2008.4925837
Reliability predictions have been the subject of much discussion over the past 20 years. Some articles have proclaimed them valueless, while others suggest they are important. Spending a great amount of time calculating numbers does not present value directly; using the numbers as the basis for additional positive activities is one reason for making predictions. Any reliability prediction should be considered a single tool in a larger reliability improvement toolbox that often feeds other, more important activities. This role of predictions in a larger reliability world is explored here. Examples of follow-on improvement activities include lessons learned about components, identification of critical components, identification of critical design features, estimation of high-stress conditions, approaches for derating, design for reliability, design for manufacture, input to an FMEA, input to a verification test plan, and warranty and repair estimates. The prediction is not the end of the process, but rather the beginning of the larger reliability improvement and design review process. Here, the value of predictions is tied to lessons learned and outcomes. Predictions have fundamentally changed over the last 20 years for several reasons. As Failure-in-Time (FIT) numbers have declined in most handbooks, the MTBF prediction has not always matched subsequent field data on an absolute scale; predictions can differ from field results by a factor of three or more. Each successive issue of Telcordia or Mil Handbook 217 (now 217Plus) appears rather similar to the prior ones, but this apparent simplicity masks some of the evolution in numerical content and models. There is much to be learned from a short review of the prediction process itself. Failure rate estimates from tables are not trustworthy by themselves, for they depend upon experience, customer applications, models, and other unknown items. At some point it is time to wrap up the prediction phase, move on to improvement, and feed other reliability tools. "Lessons learned" based upon knowledge of the design, manufacture, or customer environment are valuable. Lessons learned might cover a variety of situations that can enhance or detract from estimated reliability; others are contained in design guidelines and derating standards. All of these should be addressed early in any project, once a Bill of Materials (BOM) has been generated. Each has an impact on the prediction estimate but is not overtly included in the process.
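The handbook arithmetic behind such a prediction is simple: under the parts-count method, component FIT rates are summed and the total is inverted to give an MTBF. A minimal sketch with hypothetical FIT values (the paper's point is that this number is a starting input for derating, FMEA, and test planning, not an end in itself):

```python
def mtbf_from_fits(fit_rates):
    """Parts-count prediction: sum component FIT rates (failures per
    1e9 device-hours) and convert the total to an MTBF in hours."""
    total_fit = sum(fit_rates)
    return 1e9 / total_fit

# Hypothetical BOM-level FIT rates pulled from a handbook table:
bom_fits = {"MCU": 15.0, "DC/DC converter": 40.0, "connector": 5.0}
mtbf = mtbf_from_fits(bom_fits.values())
print(round(mtbf))  # 16666667 hours
```

As the abstract cautions, the table values feeding this sum carry large uncertainties, so the absolute MTBF can miss field data by a factor of three or more; the relative ranking of contributors is the more durable output.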
Probabilistic analysis of safety-critical adaptive systems with temporal dependences
R. Adler, D. Domis, M. Furster, M. Trapp
Pub Date: 2008-01-28 | DOI: 10.1109/RAMS.2008.4925786
Dynamic adaptation means that components are reconfigured at run time. Consequently, the degree to which a system fulfills its functional and safety requirements depends on the current system configuration at run time. The probability of a violation of functional requirements, in combination with an importance factor for each requirement, gives a measure of reliability. In the same way, the degree of violation of safety requirements can be a measure of safety. These measures can easily be derived from the probabilities of the possible system configurations. For this purpose, we introduce a new probabilistic analysis technique that determines configuration probabilities based on fault trees, Binary Decision Diagrams (BDDs), and Markov chains. In our recent work we were able to determine configuration probabilities of systems, but we neglected timing aspects. Timing delays have an impact on the adaptation behavior and are necessary to handle cyclic dependences. The contribution of the present article is to extend the analysis to models with timing delays. The technique builds upon Methodologies and Architectures for Runtime Adaptive Systems (MARS), a modeling concept we use for specifying the adaptation behavior of a system at design time. The results of this paper determine the configuration probabilities that are necessary to quantify the fulfillment of functional and safety requirements by adaptive systems.
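The authors' technique combines fault trees, BDDs, and Markov chains; the Markov-chain piece alone can be illustrated on a toy adaptive system with two configurations. A minimal sketch, assuming invented transition probabilities (not the paper's MARS models or timing extension):

```python
def steady_state(P, iters=2000):
    """Power-iterate a row-stochastic transition matrix P to obtain the
    long-run probability of occupying each system configuration."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# Hypothetical adaptive system: configuration 0 = full function,
# configuration 1 = degraded fallback after a component failure.
P = [[0.95, 0.05],   # stay full / degrade
     [0.40, 0.60]]   # recover / stay degraded
pi = steady_state(P)
print([round(p, 3) for p in pi])  # [0.889, 0.111]
```

Weighting each configuration's requirement violations by these probabilities is the step that turns configuration probabilities into the reliability and safety measures the abstract describes.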
Parameter estimation for software reliability models considering failure correlation
Bo Yang, Suchang Guo, Ning Ning, Hongzhong Huang
Pub Date: 2008-01-28 | DOI: 10.1109/RAMS.2008.4925830
Many existing software reliability models are based on the assumption of statistical independence among successive software failures. In reality, this assumption can easily be violated. In recent years, efforts have been made to relax this unrealistic assumption, and a software reliability modeling framework considering failure correlation was developed by Goseva-Popstojanova and Trivedi. However, some issues that are crucial for the proposed modeling framework to be used in practice remain unstudied, such as the estimation of model parameters. In this paper, we study the parameter estimation problem for this software reliability modeling framework. We propose a relationship function among the model parameters that can be essential both to reducing the number of parameters to be estimated and to reliability prediction using the framework. Two parameter estimation methods are developed, based on the different types of data available, using the Maximum Likelihood Estimation (MLE) method. Simulation results preliminarily show that the accuracy of both proposed estimation methods is satisfactory.
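For contrast with the correlated framework studied here, MLE in the simplest independent-failures case has a closed form: for exponential inter-failure times, the rate estimate is the failure count divided by total exposure time. A minimal sketch with hypothetical data (the paper's estimators for the Markov-renewal framework are more involved than this):

```python
def mle_exponential_rate(interfailure_times):
    """MLE of the failure rate under independent exponential
    inter-failure times: lambda_hat = n / sum(t_i)."""
    return len(interfailure_times) / sum(interfailure_times)

times = [12.0, 30.0, 18.0]  # hypothetical hours between successive failures
lam = mle_exponential_rate(times)
print(round(lam, 3))  # 0.05 failures per hour
```

It is exactly the independence baked into this estimator that the Goseva-Popstojanova and Trivedi framework relaxes, which is why its parameter estimation required separate study.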
What's wrong with bent pin analysis, and what to do about it
N. Ozarin
Pub Date: 2008-01-28 | DOI: 10.1109/RAMS.2008.4925827
Bent pin analysis is an important kind of failure modes and effects analysis that almost always ignores real-world behavior. The somewhat undisciplined nature of this FMEA means there is heavy reliance on human judgment, and given the particularly tedious nature of the task, the results are typically both incomplete and inaccurate. The analysis also provides oversimplified predictions of failure rates based on averages, or omits them entirely. However, carefully defining analysis rules that more realistically reflect real-world events makes it possible for a computer to perform a great deal of the task with far more accuracy. Using these rules, the computer can determine individual failure rates for each permutation of short and open circuits. The computer can also go beyond these computations and do a great deal of additional analysis work, freeing humans to concentrate on circuits and systems instead of pins and wires. Finally, the computer can automatically supply repeated worksheet information - and bent pin FMEA worksheets have a lot of it - so that nothing ever needs to be entered more than once. The result is a far more accurate, consistent, and complete analysis requiring much less effort. It brings bent pin analysis into the 21st century.
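The tedious core a computer can take over is the enumeration itself: which pin pairs are geometrically close enough that a bent pin could short to a neighbor. A minimal sketch, assuming a hypothetical 4-pin connector layout and reach distance (a real tool, as the paper argues, would also model bend geometry, recessed contacts, and per-permutation failure rates):

```python
from itertools import combinations
from math import hypot

def short_candidates(pins, reach):
    """Enumerate pin pairs close enough that one pin, bent over, could
    touch the other -- each pair is a candidate short for the worksheet."""
    return [(a, b) for (a, (xa, ya)), (b, (xb, yb))
            in combinations(pins.items(), 2)
            if hypot(xb - xa, yb - ya) <= reach]

# Hypothetical connector on a 2 mm grid; assume a bent pin reaches 2.5 mm.
pins = {"A1": (0, 0), "A2": (2, 0), "B1": (0, 2), "B2": (2, 2)}
print(short_candidates(pins, 2.5))
# [('A1', 'A2'), ('A1', 'B1'), ('A2', 'B2'), ('B1', 'B2')] -- the
# diagonal pairs (2.83 mm apart) are out of reach and excluded.
```

Even this toy version shows why hand analysis goes wrong: the candidate list grows quadratically with pin count, and a human reviewer tends to miss pairs or apply the reach rule inconsistently.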
System reliability evaluation using normalized test data
J.B. Farmer, P. Ellner
Pub Date: 2008-01-28 | DOI: 10.1109/RAMS.2008.4925797
Reliability is the probability that a system will perform its intended function for a specified period of time in a specified environment. Often, the reliability of a system cannot be directly measured through test and compared against its requirement because of programmatic constraints on testing. In cases where testing cannot adhere exactly to the defined mission profile, it is necessary to normalize the resulting test data to evaluate reliability performance against the system requirement. This paper describes the application of normalization techniques to reliability test data on the Marine Corps Expeditionary Fighting Vehicle (EFV) program.
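One common form of such normalization is to weight raw test hours by how stressful each test environment is relative to the mission profile, producing mission-equivalent hours. A minimal sketch with invented environments and weighting factors (the EFV program's actual normalization technique is not reproduced here):

```python
def equivalent_mission_hours(test_hours_by_env, stress_factors):
    """Normalize raw test hours to mission-equivalent hours by weighting
    each environment with a relative stress factor (hypothetical values):
    an hour at low stress counts for less than a mission hour, an hour
    at high stress counts for more."""
    return sum(test_hours_by_env[env] * stress_factors[env]
               for env in test_hours_by_env)

test_hours = {"idle": 400.0, "cross_country": 100.0}
factors = {"idle": 0.25, "cross_country": 2.0}  # relative to mission profile
eq_hours = equivalent_mission_hours(test_hours, factors)
print(eq_hours)  # 300.0
```

Dividing the equivalent hours by the number of relevant failures then gives a demonstrated MTBF that can be compared directly against the mission-profile requirement.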
Analysis of the performance of safety-critical systems with diagnosis and periodic inspection
Tieling Zhang, Yiming Wang, M. Xie
Pub Date: 2008-01-28 | DOI: 10.1109/RAMS.2008.4925785
This paper presents a method for analyzing performance indexes of safety-critical systems. It incorporates into a Markov model periodic inspection and repair occurring just after each time interval. The modeling technique is applied to the typical system structures specified in the standard IEC 61508, and both perfect and imperfect inspections and repairs can be modeled. Through derivation, a variety of important system performance indexes can be obtained in closed form, including MTTF, MTTFD, MTTFS, average availability, average probability of dangerous failure, and average probability of failure on demand. The solutions are applied to a 1-out-of-2 system structure to illustrate the usefulness of the method in analyzing system performance, for example, in choosing the proof-test interval and evaluating the average probability of failure on demand.
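For orientation, the widely used simplified low-demand approximations show how the proof-test interval drives the average probability of failure on demand; the paper's closed-form Markov solutions with imperfect inspection refine these. A minimal sketch with hypothetical rate and interval values (common-cause failures are ignored here):

```python
def pfd_avg_1oo1(lambda_du, proof_test_interval):
    """Single channel, low demand: PFD_avg ~ lambda_DU * T / 2
    (textbook approximation; lambda_DU = dangerous undetected rate)."""
    return lambda_du * proof_test_interval / 2.0

def pfd_avg_1oo2(lambda_du, proof_test_interval):
    """1-out-of-2 channel pair, no common cause:
    PFD_avg ~ (lambda_DU * T)**2 / 3 (textbook approximation)."""
    return (lambda_du * proof_test_interval) ** 2 / 3.0

lam = 2e-6   # hypothetical dangerous undetected failures per hour
T = 8760.0   # annual proof test, hours
print(f"{pfd_avg_1oo1(lam, T):.2e}")  # 8.76e-03
print(f"{pfd_avg_1oo2(lam, T):.2e}")  # 1.02e-04
```

Halving T roughly halves the 1oo1 PFD and quarters the 1oo2 PFD, which is exactly the proof-test-interval trade the paper's 1-out-of-2 example explores with a more faithful model.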
Pub Date : 2008-01-28DOI: 10.1109/RAMS.2008.4925787
Z. Vintr, M. Vintr
This paper deals with safety management for electromechanical systems intended for railway applications. The principles and procedures presented in the article are applied in the Lekov Company, a prominent producer of electric switching and controlling components used in the construction of railway vehicles. Attending carefully to problems of reliability and safety, the Lekov Company puts an integrated safety and reliability program into practice across development, design, and production. This article presents a concise characterization of the implemented program, which ensures that the defined requirements for reliability, availability, maintainability, and safety will be fulfilled during the development, design, and production of new electromechanical equipment. Due to the limited space of this article, the authors present the part of Lekov's integrated program that pertains primarily to an effective system safety assurance process. The main methods used as parts of the program are Preliminary Hazard Analysis, Failure Modes, Effects and Criticality Analysis, Fault Tree Analysis, and Reliability Block Diagram Analysis. The methods are applied in individual steps that logically connect them to one another. The article describes the procedures of the program and provides a survey of the applied methods with their characterizations. Particular attention is devoted to problems associated with safety management. The implemented reliability and safety program, even though it uses relatively simple methods, systematically ensures the fulfillment of the customer's requirements for the reliability and safety of the system. The logically linked steps taken before the design stage ensure that the engineering design will be carried out effectively, with an emphasis on fulfillment of the given requirements.
The design reliability is then verified by a reliability assessment that precedes prototype production. This procedure minimizes the possibility of producing a prototype with a structural defect. Prototype reliability tests detect problems that can then be corrected, which decreases the probability of significant or critical problems occurring in production systems.
{"title":"Safety management for electromechanical systems of railway vehicles","authors":"Z. Vintr, M. Vintr","doi":"10.1109/RAMS.2008.4925787","DOIUrl":"https://doi.org/10.1109/RAMS.2008.4925787","url":null,"abstract":"This paper deals with safety management for electromechanical systems intended for railway applications. The principles and procedures presented in the article are applied in the Lekov Company which is a prominent producer of electric switching and controlling components used in the construction of railway vehicles. Considering the problems of reliability and safety carefully, the Lekov Company puts into practice the integrated safety and reliability program in the process of development, design and production. This article presents a concise characterization of the implemented safety and reliability integrated program which ensures that in the process of development, design and production of new electromechanical equipment, the defined requirements concerning reliability, availability, maintainability and safety will be fulfilled. Due to the limited space for his article, the authors present that part of Lekov's integrated program that pertains primarily to an effective system safety assurance process. The main methods, which are used as parts of the program, are Preliminary Hazard Analysis, Failure Modes, Effects and Criticality Analysis, Fault Tree Analysis and Reliability Block Diagram Analysis. The methods are used in individual steps that logically connect methods each to other. The article describes the procedures of the program and brings a survey of applied methods with characterizations. Particular attention is devoted to problems that are associated with safety management. The implemented reliability and safety program, even if using relatively simple methods, systematically ensures the fulfillment of the customer's requirements for reliability and safety of the system. The logically linked steps taken before the design stage ensure that the engineering design will be carried out effectively, with an emphasis on fulfillment of the given requirements. The design reliability is then checked out by a reliability assessment that precedes the prototype production. This procedure minimizes the possibility of producing a prototype with a structural defect. Prototype reliability tests detect problems which can be corrected and this decreases the probability of occurrence for significant or critical problems in the systems produced in production.","PeriodicalId":143940,"journal":{"name":"2008 Annual Reliability and Maintainability Symposium","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2008-01-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116216470","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
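Among the methods the abstract above lists, Reliability Block Diagram Analysis is the most directly computable. As a minimal sketch only (the block structure and reliability values below are assumed for illustration and are not taken from the paper), a series-parallel diagram evaluates like this:

```python
def parallel(*rs):
    """Reliability of redundant (parallel) blocks: the group works if any block works."""
    prob_all_fail = 1.0
    for r in rs:
        prob_all_fail *= (1.0 - r)
    return 1.0 - prob_all_fail

def series(*rs):
    """Reliability of series blocks: the chain works only if every block works."""
    out = 1.0
    for r in rs:
        out *= r
    return out

# Hypothetical switching device: a redundant contact pair in series with its actuator.
r_system = series(parallel(0.95, 0.95), 0.99)
# (1 - 0.05 * 0.05) * 0.99 = 0.9975 * 0.99 = 0.987525
print(r_system)
```

Composing just these two rules covers any series-parallel block diagram; structures that are not series-parallel require the fault-tree or Markov techniques the program also employs.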