Learning to enhance reliability of electronic systems through effective modeling and risk assessment
L. Walls, J. Quigley
Pub Date: 2000-08-06 | DOI: 10.1109/RAMS.2000.816334
Now that electronic components have demonstrated high reliability, attention has centered upon enhancing the reliability of electronic systems. We introduce a modeling framework to support decision-making during electronic systems design with a view to enhancing operational reliability. We differentiate our work from those models that seek only to provide reliability predictions. Our premise is that modeling can be used to give a better understanding of the impact of engineering decisions on those factors affecting reliability. Through modeling, the decision-maker is encouraged to reflect upon the consequences of actions to learn how a design might be enhanced. The model formulation and data management processes are described for an assumed evolutionary design process. Bayesian approaches are used to combine data types and sources. Exploratory data analysis identifies those factors affecting operational reliability. Expert knowledge is elicited to assess how these factors might impact upon proposed designs. Statistical inference procedures are used to support an assessment of risks associated with design decisions. Applications to the design of electronic systems for aircraft illustrate the usefulness of the model. On-going research is being conducted to fully evaluate the proposed approach.
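The abstract does not specify the form of the Bayesian combination; as a minimal sketch of the kind of data pooling described, the following assumes an expert-elicited Gamma prior on a constant failure rate updated with hypothetical field failure counts (a conjugate Gamma-Poisson model). The prior parameters and field data are illustrative assumptions, not values from the paper.

```python
# Sketch: combining expert judgement (prior) with field data (likelihood)
# via a conjugate Gamma-Poisson Bayesian update. All numbers are hypothetical.
from scipy import stats

# Expert judgement encoded as a Gamma(alpha, beta) prior on the failure rate (failures/hour)
alpha_prior, beta_prior = 2.0, 4000.0      # prior mean = 2/4000 = 5e-4 failures/hour

# Observed field data from an earlier design generation
failures, exposure_hours = 3, 12000.0

# Conjugate update: posterior is Gamma(alpha + failures, beta + exposure)
alpha_post = alpha_prior + failures
beta_post = beta_prior + exposure_hours

post = stats.gamma(a=alpha_post, scale=1.0 / beta_post)
print("posterior mean failure rate:", post.mean())
print("90% credible interval:", post.interval(0.90))
```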
{"title":"Learning to enhance reliability of electronic systems through effective modeling and risk assessment","authors":"L. Walls, J. Quigley","doi":"10.1109/RAMS.2000.816334","DOIUrl":"https://doi.org/10.1109/RAMS.2000.816334","url":null,"abstract":"Now that electronic components have demonstrated high reliability, attention has centered upon enhancing the reliability of electronic systems. We introduce a modeling framework to support decision-making during electronic systems design with a view to enhancing operational reliability. We differentiate our work from those models that seek only to provide reliability predictions. Our premise is that modeling can be used to give a better understanding of the impact of engineering decisions on those factors affecting reliability. Through modeling, the decision-maker is encouraged to reflect upon the consequences of actions to learn how a design might be enhanced. The model formulation and data management processes are described for an assumed evolutionary design process. Bayesian approaches are used to combine data types and sources. Exploratory data analysis identifies those factors affecting operational reliability. Expert knowledge is elicited to assess how these factors might impact upon proposed designs. Statistical inference procedures are used to support an assessment of risks associated with design decisions. Applications to the design of electronic systems for aircraft illustrate the usefulness of the model. On-going research is being conducted to fully evaluate the proposed approach.","PeriodicalId":178321,"journal":{"name":"Annual Reliability and Maintainability Symposium. 2000 Proceedings. International Symposium on Product Quality and Integrity (Cat. No.00CH37055)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-08-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122079284","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Evaluating the residual risks of infusing new technologies into NASA missions
S. Cornford, K. Hicks
Pub Date: 2000-01-26 | DOI: 10.1109/RAMS.2000.816337
NASA's need to infuse new technologies into its missions is described, along with some of the challenges associated with technology infusion and a way to meet them. The Technology Infusion Guideline (TIG) process is described, as well as the Defect Detection and Prevention (DDP) process that serves as its underlying evaluation 'engine'. An example of the evaluation applied to one of NASA's technology developments is presented and used to illustrate the generic process. The results of applying the TIG process to the example technology demonstrate that it can penetrate to the underlying technical details needed to judge whether continued investment of development resources is warranted. The technology evaluated was deemed 'on the right track' and critical to NASA's future mission needs. The TIG process yields a technology infusion roadmap: a prioritized set of activities that must be performed to address the identified residual risks. These activities include alignment with parallel technology development work, specific characterization and testing, breadboard development, and miniaturization and ruggedization. The return on investment for implementing this process has been measured at over 20:1, with significant schedule savings. The risk reduction resulting from the process will only be directly measurable after the technology matures further.
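The paper does not give the DDP scoring formulas; the sketch below is a hypothetical illustration of how residual risk might be tallied when candidate mitigation activities each remove a fraction of a failure mode's likelihood-times-impact score. All names and numbers are invented for illustration.

```python
# Hypothetical DDP-style residual-risk tally: each failure mode carries a
# likelihood and an impact, and each mitigation activity removes some
# fraction of that risk. Values below are illustrative assumptions only.
failure_modes = {
    "detector contamination": {"likelihood": 0.3, "impact": 8.0},
    "radiation-induced upset": {"likelihood": 0.2, "impact": 9.0},
}

# effectiveness[activity][failure_mode] = fraction of that mode's risk removed
effectiveness = {
    "thermal-vacuum characterization": {"detector contamination": 0.6},
    "radiation testing": {"radiation-induced upset": 0.7},
}

def residual_risk(selected_activities):
    total = 0.0
    for mode, attrs in failure_modes.items():
        risk = attrs["likelihood"] * attrs["impact"]
        for act in selected_activities:
            risk *= 1.0 - effectiveness.get(act, {}).get(mode, 0.0)
        total += risk
    return total

print("baseline residual risk:", residual_risk([]))
print("with both activities:  ", residual_risk(list(effectiveness)))
```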
{"title":"Evaluating the residual risks of infusing new technologies into NASA missions","authors":"S. Cornford, K. Hicks","doi":"10.1109/RAMS.2000.816337","DOIUrl":"https://doi.org/10.1109/RAMS.2000.816337","url":null,"abstract":"NASA's need to infuse new technologies into its missions has been described. Some of the challenges associated with new technology infusion, and a way to meet those challenges, have been presented. The Technology Infusion Guideline (TIG) process has been described as well as the Defect Detection and Prevention (DDP) process that is the underlying evaluation 'engine'. An example of this under evaluation on one of NASA's technologies development has been presented. This example is used to illustrate the generic process. The results of implementing the TIG process on the example technology clearly demonstrates that the TIG process can penetrate to underlying technical details to evaluate the viability of continued technology development resources. The technology evaluated was deemed 'on the right track' and critical to NASA's future missions needs. The TIG process results in a technology infusion roadmap, or prioritized set of activities which must be performed to address the identified residual risks. These activities include alignment with other parallel technology development work, specific characterization and testing, breadboard development and miniaturization and ruggedization. The return on investment for implementing this process has been measured at over 20:1 with significant schedule savings. The risk reduction as a result of implementing this process will only be directly measurable after the technology matures to a greater extent.","PeriodicalId":178321,"journal":{"name":"Annual Reliability and Maintainability Symposium. 2000 Proceedings. International Symposium on Product Quality and Integrity (Cat. No.00CH37055)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-01-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114433653","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reliability simulation model for systems with multiple intermediate storages
H. Kortelainen, J. Salmikuukka, S. Pursio
Pub Date: 2000-01-24 | DOI: 10.1109/RAMS.2000.816285
The reliability model presented in this paper describes a complex industrial system containing several intermediate storages. The model utilizes industrial data that, in this case, is derived mainly from engineering judgement. A method for incorporating intermediate storages into the reliability model, and for estimating the influence of storage capacity on system reliability, is presented. A mathematical description of an industrial system calls for numerous variables and dependencies, and analytical results are difficult to obtain. Simulation has proven to be an effective tool for analyzing the availability performance of industrial systems with intermediate storages. Bottlenecks of the production (sub-systems or pieces of equipment) can be found easily, and alternative improvement strategies can be compared. A comprehensive and correctly constructed reliability model offers a new tool, especially for those responsible for maintenance planning and process design, and use of the model supports decision-making when improvements are planned. Model construction requires detailed knowledge of the system under study, and the failure- and repair-time distributions and their parameters must be known. The importance of reliable information must be emphasized, as incorrect distribution parameters in a simulation model can lead to misleading results.
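As a rough illustration of why such a simulation is useful (this is not the authors' model), the following sketch steps a two-machine line with one intermediate storage through time, with assumed exponential failure and repair times, and shows how throughput rises with buffer capacity.

```python
# Minimal time-stepped Monte Carlo sketch: two machines in series with an
# intermediate storage; each machine fails and is repaired with exponential
# times. All rates and buffer sizes are illustrative assumptions.
import random

def simulate(buffer_capacity, hours=200_000, mtbf=50.0, mttr=5.0, seed=1):
    random.seed(seed)
    up = [True, True]                      # machine 1 feeds machine 2
    t_next = [random.expovariate(1 / mtbf) for _ in range(2)]
    buffer_level, produced = 0, 0
    for t in range(hours):                 # 1-hour steps, nominal rate 1 unit/hour
        for m in range(2):
            if t >= t_next[m]:             # machine m changes state (fails or is repaired)
                up[m] = not up[m]
                t_next[m] = t + random.expovariate(1 / (mtbf if up[m] else mttr))
        if up[1] and (up[0] or buffer_level > 0):
            if not up[0]:
                buffer_level -= 1          # machine 2 draws from the storage
            produced += 1
        elif up[0] and buffer_level < buffer_capacity:
            buffer_level += 1              # machine 2 down: machine 1 fills the storage
    return produced / hours                # effective throughput (fraction of ideal)

for cap in (0, 5, 20):
    print(f"buffer capacity {cap:>2}: throughput = {simulate(cap):.3f}")
```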
{"title":"Reliability simulation model for systems with multiple intermediate storages","authors":"H. Kortelainen, J. Salmikuukka, S. Pursio","doi":"10.1109/RAMS.2000.816285","DOIUrl":"https://doi.org/10.1109/RAMS.2000.816285","url":null,"abstract":"The reliability model presented in this paper describes a complex industrial system, which contains several intermediate storages. The model utilizes industrial data that in this case is mainly derived from engineer judgements. A method to incorporate intermediate storages into the reliability model, and the estimation of the influence of the storage capacity on the system reliability is presented. A mathematical description of an industrial system calls for numerous variables and dependencies, and analytical results are difficult to obtain. Simulation has proven to be an effective tool for analyzing the availability performance of industrial systems with intermediate storages. Bottlenecks of the production-sub-systems or pieces of equipment-can be easily found and alternative improvement strategies can be compared. A comprehensive and correctly constructed reliability model offers a new tool especially for the persons responsible for the maintenance planning and process design, and utilization of the model supports the decision making when improvements are planned. Model construction requires detailed knowledge of the system under study and the failure- and repair time distributions and their parameters must be known. The importance of reliable information has to be emphasized as incorrect distribution parameters in a simulation model can lead to misleading results.","PeriodicalId":178321,"journal":{"name":"Annual Reliability and Maintainability Symposium. 2000 Proceedings. International Symposium on Product Quality and Integrity (Cat. No.00CH37055)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-01-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115161144","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
HASS development method: screen development, change schedule, and re-prove schedule
M. Silverman
Pub Date: 2000-01-24 | DOI: 10.1109/RAMS.2000.816315
HALT (highly accelerated life testing) and HASS (highly accelerated stress screen) are two very powerful tools that can help manufacturers achieve high reliability quickly, both in the design phase and in the manufacturing phase. HALT is used in the design phase to help reduce the number of design-related problems. HASS is used in the production phase to help reduce the number of infant-mortality failures. HALT is always performed prior to developing a HASS profile, because the HASS profile uses information from HALT when choosing the profile parameters. Screens for HASS are always developed using a HASS development process, whose goal is to provide the most effective and quickest screen possible. The effectiveness of the screen is measured by its ability to find defects in the product without removing significant life. This paper describes different methods of developing a screen using the HASS development methodology and gives guidelines on when to change a screen and when it is necessary to re-submit a product through the HASS development process in order to re-prove a screen.
{"title":"HASS development method: screen development, change schedule, and re-prove schedule","authors":"M. Silverman","doi":"10.1109/RAMS.2000.816315","DOIUrl":"https://doi.org/10.1109/RAMS.2000.816315","url":null,"abstract":"HALT (highly accelerated life testing) and HASS (highly accelerated stress screen) are two very powerful tools that can help manufacturers achieve high reliability quickly both in the design phase and in the manufacturing phase. HALT is used in the design phase to help reduce the number of design-related problems. HASS is used in the production phase to help reduce the number of infant mortality types of failures. HALT is always performed prior to developing a HASS profile because the HASS profile uses the information from HALT when choosing the profile parameters. Screens for HASS are always developed using a HASS development process. The goal of HASS development is to provide the most effective and quickest screen possible. The effectiveness of the screen is measured in its ability to find defects in the product without removing significant life. This paper describes different methods of developing a screen using the HASS development methodology and gives guidelines on when to change a screen and when it is necessary to re-submit a product through the HASS development process in order to reprove a screen.","PeriodicalId":178321,"journal":{"name":"Annual Reliability and Maintainability Symposium. 2000 Proceedings. International Symposium on Product Quality and Integrity (Cat. No.00CH37055)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-01-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125551900","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Interval methods for improved robot reliability estimation
C. Carreras, I. Walker
Pub Date: 2000-01-24 | DOI: 10.1109/RAMS.2000.816278
In this paper, the authors present and discuss a new interval-based method of reliability estimation using fault trees for the case of uncertain and time-varying input reliability data. The approach is based on the generation of output distributions (probability estimates with appropriate ranges of uncertainty) which preserve the effects of uncertainty in the input (component or subsystem-level) data. The input data is represented using appropriate interval-based structures, and formal interval analysis is used in the propagation of the data through fault trees. The authors show that the method avoids the key problem of loss of uncertainty inherent in some previously suggested approaches for the time-varying case. They further show that the method is more computationally efficient than methods proposed previously to solve the above problem. The method is illustrated using an example of reliability estimation for a robot manipulator system.
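The paper's interval structures are not reproduced here; as a minimal sketch of interval propagation through fault-tree gates, the following applies the standard independent-event AND/OR formulas to hypothetical component failure-probability intervals for a manipulator.

```python
# Interval propagation through fault-tree gates (independent events assumed).
# Each probability is carried as a [lower, upper] interval; the gate formulas
# are monotone in each input, so the bounds propagate directly.
def and_gate(*intervals):
    # output requires all inputs to occur: probabilities multiply
    lo, hi = 1.0, 1.0
    for a, b in intervals:
        lo *= a
        hi *= b
    return (lo, hi)

def or_gate(*intervals):
    # output occurs if any input occurs: P = 1 - prod(1 - p_i)
    prod_lo, prod_hi = 1.0, 1.0
    for a, b in intervals:
        prod_lo *= 1.0 - a      # lower ends -> lower bound of the OR probability
        prod_hi *= 1.0 - b      # upper ends -> upper bound
    return (1.0 - prod_lo, 1.0 - prod_hi)

# hypothetical uncertain component failure probabilities as [lower, upper] intervals
p_motor   = (0.001, 0.004)
p_encoder = (0.002, 0.003)
p_cable   = (0.0005, 0.001)

# joint fails if the motor fails OR both sensing paths fail
p_sensing = and_gate(p_encoder, p_cable)
p_joint   = or_gate(p_motor, p_sensing)
print("joint failure probability interval:", p_joint)
```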
{"title":"Interval methods for improved robot reliability estimation","authors":"C. Carreras, I. Walker","doi":"10.1109/RAMS.2000.816278","DOIUrl":"https://doi.org/10.1109/RAMS.2000.816278","url":null,"abstract":"In this paper, the authors present and discuss a new interval-based method of reliability estimation using fault trees for the case of uncertain and time-varying input reliability data. The approach is based on the generation of output distributions (probability estimates with appropriate ranges of uncertainty) which preserve the effects of uncertainty in the input (component or subsystem-level) data. The input data is represented using appropriate interval-based structures, and formal interval analysis is used in the propagation of the data, via fault trees. The authors show that the method avoids the key problem of loss of uncertainty inherent in some previously suggested approaches for the time-varying case. They further show that the method is more computationally efficient than methods proposed previously to solve the above problem. The method is illustrated using an example of reliability estimation for a robot manipulator system.","PeriodicalId":178321,"journal":{"name":"Annual Reliability and Maintainability Symposium. 2000 Proceedings. International Symposium on Product Quality and Integrity (Cat. No.00CH37055)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-01-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129115496","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Automobile engine reliability, maintainability and oil maintenance
R.D. Youngk
Pub Date: 2000-01-24 | DOI: 10.1109/RAMS.2000.816290
This paper investigates the effectiveness of the oil change, one of the most basic procedures in automobile engine preventive maintenance. The analysis is based on a review of oil, engine, and bearing technologies and on a survey of vehicle operators. The oil development, specification, and approval process is also discussed. The results indicate that engine reliability depends significantly on the interval between oil changes. Surprisingly, the survey data show that oil changes, when too frequent, can reduce the expected life of an automobile engine; this unexpected outcome is supported by the lubrication technology literature. Changing engine oil at the proper mileage can improve engine reliability and has the potential to reduce nationwide waste and recycled oil by 325 million gallons annually. Despite more demanding operating conditions, engine reliability has also improved. Many automobile operators change engine oil more frequently than the manufacturer requires. All automobile manufacturers provide oil change mileage recommendations based on climate and type of driving. A review of operator's manuals was used to determine these mileages: most manufacturers call for oil changes at about 7500 miles for "normal service" or 3000 miles for "severe service". This paper concludes that automobile engine reliability will be improved by using these recommended oil drain intervals, with the potential for a significant nationwide reduction in waste and recycled oil.
{"title":"Automobile engine reliability, maintainability and oil maintenance","authors":"R.D. Youngk","doi":"10.1109/RAMS.2000.816290","DOIUrl":"https://doi.org/10.1109/RAMS.2000.816290","url":null,"abstract":"This paper provides an investigation of the effectiveness of the oil change, one of the most basic procedures for automobile engine preventive maintenance. The analysis in this paper is based on a review of oil, engine, and bearing technologies and on a survey of vehicle operators. The oil development, specification and approval process is also discussed. The results indicate engine reliability is significantly dependent on the period between oil changes. Surprisingly, the survey data shows that oil changes, when too frequent, can reduce the expected life of an automobile engine. The unexpected outcome is supported by lubrication technology literature. Changing engine oil at the proper mileage can improve engine reliability and has the potential to reduce nationwide waste and recycled oil by 325 million gallons annually. Despite more demanding conditions, engine reliability has also improved. Many automobile operators change engine oil more frequently than required by the manufacturer. All automobile manufacturer's provide oil change mileage recommendations which are based on the climate and the type of driving. A query in the operator's manuals is used to determine this mileage and most manufacturers require oil changes at about 7500 miles for \"normal service\" or 3000 miles for \"severe service\". This paper concludes that automobile engine reliability will be improved by using these recommended oil drain intervals with potential results of a significant nationwide reduction in waste and recycled oil.","PeriodicalId":178321,"journal":{"name":"Annual Reliability and Maintainability Symposium. 2000 Proceedings. International Symposium on Product Quality and Integrity (Cat. No.00CH37055)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-01-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123517196","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Likelihood adjustment: a simple method for better forecasting from small samples
J. W. Fulton, R. Abernethy
Pub Date: 2000-01-24 | DOI: 10.1109/RAMS.2000.816299
New methods developed by the authors improve data analysis and reliability prediction accuracy when using maximum likelihood estimates (MLE), particularly for small samples. The Fulton factor (FF) modifies the likelihood ratio test to reduce nonconservative bias when measuring differences between designs. The reduced bias adjustment (RBA) factor decreases bias in distribution parameter estimates, giving better reliability and lifetime predictions. Finally, a postulated relationship designated the justified likelihood function (JLF) reduces confidence contour bias, giving better confidence interval estimates and supporting graphical comparison of design alternatives. Monte Carlo simulation provides the basis for these conclusions. The results herein apply to complete samples, but also work well with suspensions when only the number of failures is used as the sample size. Additional research into data with suspensions is desirable.
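The specific FF, RBA, and JLF factors are not reproduced here. As a hedged illustration of the kind of Monte Carlo study underlying such adjustments, the sketch below estimates the median bias of the small-sample Weibull shape MLE, from which a median-bias correction factor of the RBA type could be read off; the sample size and true parameters are arbitrary.

```python
# Monte Carlo estimate of the median bias of the Weibull shape MLE for a
# small complete sample. This illustrates how a bias-adjustment factor can
# be derived by simulation; it does not reproduce the paper's tabulated values.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_beta, true_eta, n, trials = 2.0, 100.0, 10, 2000

beta_hats = []
for _ in range(trials):
    sample = true_eta * rng.weibull(true_beta, size=n)
    # fit a 2-parameter Weibull (location fixed at 0) by maximum likelihood
    beta_hat, _, _ = stats.weibull_min.fit(sample, floc=0)
    beta_hats.append(beta_hat)

median_ratio = np.median(beta_hats) / true_beta
print(f"median beta_hat / true beta for n={n}: {median_ratio:.3f}")
print(f"implied median-bias correction factor: {1 / median_ratio:.3f}")
```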
{"title":"Likelihood adjustment: a simple method for better forecasting from small samples","authors":"J. W. Fulton, R. Abernethy","doi":"10.1109/RAMS.2000.816299","DOIUrl":"https://doi.org/10.1109/RAMS.2000.816299","url":null,"abstract":"New methods developed by the authors improve data analysis and reliability prediction accuracy when using maximum likelihood estimates (MLE) particularly for small samples. The Fulton factor (FF) modifies the likelihood ratio test to reduce nonconservative bias when measuring difference between designs. The reduced bias adjustment (RBA) factor decreases bias in distribution parameter estimates for better reliability and lifetime predictions. Finally, a postulated relationship designated the justified likelihood function (JLF) reduces confidence contour bias for better confidence interval estimates and for use in graphical comparison of design alternatives. Monte Carlo simulation provides the basis for these conclusions. The results herein apply to complete samples, but also work well with suspensions using failure quantity only as the sample size. Additional research into data with suspensions is desirable.","PeriodicalId":178321,"journal":{"name":"Annual Reliability and Maintainability Symposium. 2000 Proceedings. International Symposium on Product Quality and Integrity (Cat. No.00CH37055)","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-01-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115782406","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multi-variate Weibull model for predicting system-reliability, from testing results of the components
L. Klyatis, O.I. Teskin, J. W. Fulton
Pub Date: 2000-01-24 | DOI: 10.1109/RAMS.2000.816298
This paper discusses the problem of predicting system reliability from accelerated test results on the system's components. The problem arises when testing the system as a whole is either too costly or impossible within a short period of time, especially at the beginning of development. A multi-variate Weibull model is proposed to utilize component test results in predicting system reliability with reduced test length and minimized cost. Predicting system reliability in this situation requires calculating not only the point estimate of the reliability index but also the lower confidence bound (LCB) at a given confidence probability q (q = 0.8-0.95). The authors' goal is to give an algorithm that calculates not an approximate LCB (typically based on a normal approximation to the point reliability estimate), but the exact LCB corresponding to a given confidence probability.
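The paper's exact LCB algorithm is not reproduced here. Purely as an illustration of the quantity being bounded, the following sketch fits component Weibull models to hypothetical test data and uses a simple bootstrap, an approximation rather than the exact method, to obtain a lower confidence bound on series-system reliability at a mission time.

```python
# Approximate (bootstrap) lower confidence bound on series-system reliability
# built from component Weibull fits. Data, mission time, and confidence level
# are hypothetical; the paper's exact LCB method is not reproduced here.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
t_mission, q = 500.0, 0.90               # mission time and confidence probability

# hypothetical component failure times (assumed already converted to use conditions)
component_data = [
    rng.weibull(1.8, 15) * 4000.0,       # component A
    rng.weibull(2.5, 12) * 6000.0,       # component B
]

def system_reliability(datasets, t):
    r = 1.0
    for data in datasets:
        c, _, scale = stats.weibull_min.fit(data, floc=0)
        r *= float(stats.weibull_min.sf(t, c, scale=scale))   # series system
    return r

point = system_reliability(component_data, t_mission)

# bootstrap: refit on resampled data, take the (1 - q) quantile as the LCB
boot = []
for _ in range(500):
    resampled = [rng.choice(d, size=len(d), replace=True) for d in component_data]
    boot.append(system_reliability(resampled, t_mission))
lcb = np.quantile(boot, 1.0 - q)

print(f"point estimate R(t): {point:.4f}, {q:.0%} lower confidence bound: {lcb:.4f}")
```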
{"title":"Multi-variate Weibull model for predicting system-reliability, from testing results of the components","authors":"L. Klyatis, O.I. Teskin, J. W. Fulton","doi":"10.1109/RAMS.2000.816298","DOIUrl":"https://doi.org/10.1109/RAMS.2000.816298","url":null,"abstract":"This paper discusses the problem of system reliability prediction by accelerated testing results on its components. This problem appears when the testing of a system as a whole either has high cost or may be impossible in a short period of time, especially at the beginning of development. A multi-variate Weibull model is proposed to utilize testing results of components in predicting system reliability with reduced test length and minimized cost. Prediction of system reliability in a given situation means that we must know how to calculate not only the point estimation of reliability index, but also the lower confidence bound (LCB) with given confidence probability q (q=0.8-0.95). In this case, the authors' goal is to give the algorithm for calculating not by approximation for LCB (based as a rule on normal distribution of point reliability estimation), but exact LCB corresponding to a given confidence probability.","PeriodicalId":178321,"journal":{"name":"Annual Reliability and Maintainability Symposium. 2000 Proceedings. International Symposium on Product Quality and Integrity (Cat. No.00CH37055)","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-01-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134308957","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Confidence limits on the inherent availability of equipment
Wendai Wang, D. Kececioglu
Pub Date: 2000-01-24 | DOI: 10.1109/RAMS.2000.816301
The inherent availability is an important performance index for a repairable system and is usually estimated from times-between-failures and times-to-restore data. The formula for calculating a point estimate of the inherent availability from collected data is well known, but the quality of the calculated value is suspect when sample sizes are small. The solution is to use confidence limits on the inherent availability at a given confidence level in addition to the point estimate. However, there is no easy way to compute the confidence limits on the calculated availability; indeed, no adequate approach to computing a confidence interval for the inherent availability from sample data has been available. In this paper, the uncertainties of small random samples are taken into account: the estimated mean time between failures, the estimated mean time to restore, and the estimated inherent availability are treated as random variables. When the distributions of both times-between-failures and times-to-restore are exponential, exact confidence limits on the inherent availability are derived. Based on reasonable assumptions, a nonparametric method is proposed for determining approximate confidence limits on the inherent availability from data, without assuming any particular times-between-failures or times-to-restore distributions. Numerical examples are provided to demonstrate the validity of the proposed solution, and the results are compared with those obtained from Monte Carlo simulations. The proposed method yields satisfactory accuracy for engineering applications.
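One standard way to obtain exact limits in the exponential case uses the fact that the ratio of the estimated to the true MTTR/MTBF ratio follows an F distribution; the sketch below implements that form with hypothetical data and may differ in detail from the paper's derivation.

```python
# Exact confidence limits on inherent availability A_i = MTBF/(MTBF + MTTR)
# for exponential times-between-failures and times-to-restore, using the
# F-distribution result rho_hat / rho ~ F(2*n_r, 2*n_f), where rho = MTTR/MTBF.
# The sample data below are hypothetical.
from scipy import stats

tbf = [120.0, 340.0, 210.0, 95.0, 400.0, 260.0]   # times between failures (h)
ttr = [4.0, 9.0, 2.5, 6.0, 3.5]                    # times to restore (h)

n_f, n_r = len(tbf), len(ttr)
mtbf_hat, mttr_hat = sum(tbf) / n_f, sum(ttr) / n_r
rho_hat = mttr_hat / mtbf_hat                      # estimated MTTR/MTBF ratio
a_hat = 1.0 / (1.0 + rho_hat)                      # point estimate of A_i

alpha = 0.10                                       # 90% two-sided interval
f_lo = stats.f.ppf(alpha / 2, 2 * n_r, 2 * n_f)
f_hi = stats.f.ppf(1 - alpha / 2, 2 * n_r, 2 * n_f)

# A_i is decreasing in rho, and rho = rho_hat / F, so the limits are:
a_lower = 1.0 / (1.0 + rho_hat / f_lo)
a_upper = 1.0 / (1.0 + rho_hat / f_hi)
print(f"A_i point estimate: {a_hat:.4f}")
print(f"{1 - alpha:.0%} confidence limits: ({a_lower:.4f}, {a_upper:.4f})")
```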
{"title":"Confidence limits on the inherent availability of equipment","authors":"Wendai Wang, D. Kececioglu","doi":"10.1109/RAMS.2000.816301","DOIUrl":"https://doi.org/10.1109/RAMS.2000.816301","url":null,"abstract":"The inherent availability, is an important performance index for a repairable system, and is usually estimated from the times-between-failures and the times-to-restore data. The formula for calculating a point estimate of the inherent availability from collected data is well known. But the quality of the calculated inherent availability is suspect because of small data sample sizes. The solution is to use the confidence limits on the inherent availability at a given confidence level, in addition to the point estimator. However, there is no easy way to compute the confidence limits on the calculated availability. Actually, no adequate approach to compute the confidence interval for the inherent availability, based on sample data, is available. In this paper, the uncertainties of small random samples are taken into account. The estimated mean times between failures, mean times to restore and the estimated inherent availability are treated as random variables. When the distributions of both times-between-failures and times-to-restore are exponential, the exact confidence limits on the inherent availability are derived. Based on reasonable assumptions, a nonparametric method of determining the approximate confidence limits on the inherent availability from data are proposed, without assuming any times-between-failures and times-to-restore distributions. Numerical examples are provided to demonstrate the validity of the proposed solution, which are compared with the results obtained from Monte Carlo simulations. It turns out that the proposed method yields satisfactory accuracy for engineering applications.","PeriodicalId":178321,"journal":{"name":"Annual Reliability and Maintainability Symposium. 2000 Proceedings. International Symposium on Product Quality and Integrity (Cat. No.00CH37055)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-01-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128868164","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reliability-related safety analyses for satellite navigation systems
Meng-Lai Yin, C. L. Hyde, L. E. James
Pub Date: 2000-01-24 | DOI: 10.1109/RAMS.2000.816325
Safety and reliability are two interrelated attributes of safety-critical systems. While typical safety analysis focuses on preventing hazards associated with erroneous safety-critical outputs, this paper introduces an equally important hazard, the loss of critical functionality, referred to as the "loss-of-function" hazard. Tradeoffs are studied among three safety/reliability measures: the probability of working correctly, the probability of generating erroneous outputs, and the probability of losing critical functionality. One goal of this study is to assist system engineers in making correct and timely design decisions. A major problem encountered in computing the probabilities of the various safety hazards is the treatment of initial conditions: a fault-tolerant system can be in various operational conditions, and a hazard can occur under any of these conditions, each with a different probability. To provide a reasonable estimate, a measuring method that incorporates all possible initial conditions is proposed.
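As a minimal sketch of the initial-condition weighting described, the following averages conditional hazard probabilities over hypothetical initial operating conditions of a redundant system; all states and numbers are invented for illustration.

```python
# Hazard probabilities averaged over possible initial operating conditions:
# P(hazard) = sum over conditions of P(condition) * P(hazard | condition).
# All states and values below are hypothetical.
initial_conditions = {
    # condition: (probability of starting in it,
    #             P(erroneous output | condition), P(loss of function | condition))
    "fully redundant":      (0.90, 1.0e-7, 2.0e-6),
    "one channel degraded": (0.09, 8.0e-7, 4.0e-5),
    "single string":        (0.01, 5.0e-6, 6.0e-4),
}

p_erroneous = sum(w * pe for w, pe, _ in initial_conditions.values())
p_loss_of_function = sum(w * pl for w, _, pl in initial_conditions.values())
p_correct = 1.0 - p_erroneous - p_loss_of_function

print(f"P(erroneous output):  {p_erroneous:.3e}")
print(f"P(loss of function):  {p_loss_of_function:.3e}")
print(f"P(working correctly): {p_correct:.6f}")
```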
{"title":"Reliability-related safety analyses for satellite navigation systems","authors":"Meng-Lai Yin, C. L. Hyde, L. E. James","doi":"10.1109/RAMS.2000.816325","DOIUrl":"https://doi.org/10.1109/RAMS.2000.816325","url":null,"abstract":"Safety and reliability are two interrelated attributes for safety-critical systems. While the typical safety analysis focuses on preventing hazards associated with erroneous safety critical outputs, this paper introduces an equally important hazard for the loss of critical functionality, referred to as the \"loss-of-function\" hazard. Tradeoffs are studied among three safety/reliability measures, i.e., the probability of working correctly, the probability of generating erroneous outputs and the probability of losing critical functionality. One of the goals for this study is to assist system engineers in making correct and timely design decisions. A major problem encountered in computing the probabilities of the various safety hazards is the initial condition consideration. This is because a fault-tolerant system can have various operational conditions and a hazard can occur under any of the working conditions, each with different probabilities. To provide a reasonable estimation, a measuring method that incorporates all possible initial conditions is proposed.","PeriodicalId":178321,"journal":{"name":"Annual Reliability and Maintainability Symposium. 2000 Proceedings. International Symposium on Product Quality and Integrity (Cat. No.00CH37055)","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-01-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131658609","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}