The use of HALT to improve computer reliability for point-of-sale equipment
Pub Date: 1998-01-19  DOI: 10.1109/RAMS.1998.653597
R.H. Gusciora
This paper describes a manufacturer's use of HALT (highly accelerated life test) to help identify the root causes of multiple, intermittent, and complex problems with certain personal computers (PCs) as used in point-of-sale (POS) equipment. In addition to identifying the root causes of "forced" hardware problems by means of these special and severe environmental tests, the paper describes attempts to understand the relationships between the test failures and the experienced factory and field problems. Although the tested PCs have been primarily used in point-of-sale equipment, their hardware is very similar to that of ordinary PCs, so the paper's results are applicable to the average PC user. The paper has three basic conclusions: (1) single-sided board construction, while inexpensive, is not suitable for the very high reliability required by POS applications; (2) tin-plated connectors are not reliable for use in POS equipment, especially for low-force, low-current applications like SIMM cards; and (3) HALT served as a useful tool for identifying some of the perplexing sources of factory and field problems with PCs. Note that the second conclusion is also applicable to those not in the POS market: (a) tin platings are probably not suitable for certain connectors in home and office PCs; and (b) the data provides a rare example where an accelerated test has quickly demonstrated tin-plated connectors to be troublesome, in situ, in complex electronic systems.
{"title":"The use of HALT to improve computer reliability for point-of-sale equipment","authors":"R.H. Gusciora","doi":"10.1109/RAMS.1998.653597","DOIUrl":"https://doi.org/10.1109/RAMS.1998.653597","url":null,"abstract":"This paper describes a manufacturer's use of HALT (highly accelerated life test) to help identify the root causes of multiple, intermittent, and complex problems with certain personal computers (PCs) as used in point-of-sale (POS) equipment. In addition to identifying the root causes of \"forced\" hardware problems by means of these special and severe environmental tests, the paper describes attempts to understand the relationships between the test failures and the experienced factory and field problems. Although the tested PCs have been primarily used in point-of-sale equipment, their hardware is very similar to that of ordinary PCs, so the paper's results are applicable to the average PC user. The paper has three basic conclusions: (1) single-sided board construction, while inexpensive, is not suitable for the very high reliability required by POS applications; (2) tin-plated connectors are not reliable for use in POS equipment, especially for low-force, low-current applications like SIMMS cards; and (3) HALT served as a useful tool for identifying some of the perplexing sources of factory and field problems with PCs. Note that the second conclusion is also applicable to those not in the POS market: (a) tin platings are probably not suitable for certain connectors in home and office PCs; and (b) the data provides a rare example where an accelerated test has quickly demonstrated tin-plated connectors to be troublesome, in situ, in complex electronic systems.","PeriodicalId":275301,"journal":{"name":"Annual Reliability and Maintainability Symposium. 1998 Proceedings. International Symposium on Product Quality and Integrity","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-01-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127143765","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Efficient basic event orderings for binary decision diagrams
Pub Date: 1998-01-19  DOI: 10.1109/RAMS.1998.653583
John Andrews, L. M. Bartlett
Significant advances have been made in methodologies to analyse the fault tree diagram. The most successful of these developments has been the binary decision diagram (BDD) approach. This approach has been shown to improve both the efficiency of determining the minimal cut sets of the fault tree and the accuracy of the calculation procedure used to determine the top event parameters. To utilise the BDD approach the fault tree structure is first converted to the BDD format. This conversion can be accomplished efficiently but requires the basic events in the fault tree to be placed in an ordering. A poor ordering can result in a BDD which is not an efficient representation of the fault tree logic structure. The advantages to be gained by utilising the BDD technique rely on the efficiency of the ordering scheme. Alternative ordering schemes have been investigated and no one scheme is appropriate for every tree structure. Research to date has not found any rule-based means of determining the best way of ordering basic events for a given fault tree structure. The work presented in this paper takes a machine learning approach based on genetic algorithms to select the most appropriate ordering scheme. Features which describe a fault tree structure have been identified and these provide the inputs to the machine learning algorithm. A set of possible ordering schemes has been selected based on previous heuristic work. The objective of the work detailed in the paper is to predict the most efficient of the possible ordering alternatives from parameters which describe a fault tree structure.
{"title":"Efficient basic event orderings for binary decision diagrams","authors":"John Andrews, L. M. Bartlett","doi":"10.1109/RAMS.1998.653583","DOIUrl":"https://doi.org/10.1109/RAMS.1998.653583","url":null,"abstract":"Significant advances have been made in methodologies to analyse the fault tree diagram. The most successful of these developments has been the binary decision diagram (BDD) approach. This approach has been shown to improve both the efficiency of determining the minimal cut sets of the fault tree and also the accuracy of the calculation procedure used to determine the top event parameters. To utilise the BDD approach the fault tree structure is first converted to the BDD format. This conversion can be accomplished efficiently but requires the basic events in the fault tree to be placed in an ordering. A poor ordering can result in a BDD which is not an efficient representation of the fault tree logic structure. The advantages to be gained by utilising the BDD technique rely on the efficiency of the ordering scheme. Alternative ordering schemes have been investigated and no one scheme is appropriate for every tree structure. Research to date has not found any rule based means of determining the best way of ordering basic events for a given fault tree structure. The work presented in this paper takes a machine learning approach based on genetic algorithms to select the most appropriate ordering scheme. Features which describe a fault tree structure have been identified and these provide the inputs to the machine learning algorithm. A set of possible ordering schemes has been selected based on previous heuristic work. The objective of the work detailed in the paper is to predict the most efficient of the possible ordering alternatives from parameters which describe a fault tree structure.","PeriodicalId":275301,"journal":{"name":"Annual Reliability and Maintainability Symposium. 1998 Proceedings. International Symposium on Product Quality and Integrity","volume":"88 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-01-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116085423","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Telecommunications use environment application
Pub Date: 1998-01-19  DOI: 10.1109/RAMS.1998.653783
T. Kiang
This paper defines the telecommunications (telecom) product use environment conditions and describes the characteristics and technological constraints in high density electronic packaging for telecom product applications. A design approach is documented to guide new technology product development. This project attempts to establish a user-supplier linkage to translate the product use environment conditions into relevant physical design characteristics. Such information is necessary for cost-effective selection of components and materials used in building high density packaging modules. Whereas it is essential to specify the performance limits of a module when used in a product, it becomes imperative to have full knowledge of the end-use product environment conditions. The relationship of module design and product application is addressed. The results of this project lay the groundwork for further development of packaging technology trends, broadening of the scope of applications, and harnessing the benefits derived from telecom product investments. The focus here is on the realization of a new generation of physical design concepts that involve high density packaging and the selection of appropriate technologies as demanded in a rapidly evolving telecom industry.
{"title":"Telecommunications use environment application","authors":"T. Kiang","doi":"10.1109/RAMS.1998.653783","DOIUrl":"https://doi.org/10.1109/RAMS.1998.653783","url":null,"abstract":"This paper defines the telecommunications (telecom) product use environment conditions and describes the characteristics and technological constraints in high density electronic packaging for telecom product applications. A design approach is documented to guide new technology product development. This project attempts to establish a user-supplier linkage to translate the product use environment conditions into relevant physical design characteristics. Such information is necessary for cost-effective selection of components and materials used in building high density packaging modules. Whereas it is essential to specify the performance limits of a module when used in a product, it becomes imperative to have full knowledge of the end use product environment conditions. The relationship of module design and product application is addressed. The results of this project lay the ground work for further development of the packaging technology trends, broadening of the scope of applications, and harnessing the benefits derived from telecom product investments. The focus here is on realization of a new generation of physical design concepts that involve high density packaging and the selection of appropriate technologies as demanded in a rapidly evolving telecom industry.","PeriodicalId":275301,"journal":{"name":"Annual Reliability and Maintainability Symposium. 1998 Proceedings. International Symposium on Product Quality and Integrity","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-01-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122571454","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Parameter estimation for mixed-Weibull distribution
Pub Date: 1998-01-19  DOI: 10.1109/RAMS.1998.653782
D. Kececioglu, Wendai Wang
In reliability engineering, it is known that electrical and mechanical equipment usually have more than one failure mode or cause. The mixed Weibull distribution is an appropriate distribution to use in modeling the lifetimes of units that have more than one failure cause. However, due to the lack of a systematic statistical procedure for fitting an appropriate distribution to such a mixed data set, it has not been widely used. A mixed Weibull distribution represents a population that consists of several Weibull subpopulations. In this paper, a new approach is developed to estimate the mixed-Weibull distribution's parameters. First, the population sample data are split into subpopulation data sets over the whole test duration using the posterior probability that each observation belongs to each subpopulation. Then, with the new concepts of fractional failure and mean order number, the proposed approach combines the least-squares method with Bayes' theorem, applies single-Weibull parameter estimation to each derived subgroup data set, and estimates the parameters of each subpopulation. The proposed approach can also be applied to complete, censored, and grouped data samples. Its superiority is particularly significant when the sample size is relatively small and when the subpopulations are well mixed. A numerical example is given to compare the proposed method with the conventional plotting method of subpopulation separation. It turns out that the proposed method yields more accurate parameter estimates.
{"title":"Parameter estimation for mixed-Weibull distribution","authors":"D. Kececioglu, Wendai Wang","doi":"10.1109/RAMS.1998.653782","DOIUrl":"https://doi.org/10.1109/RAMS.1998.653782","url":null,"abstract":"In reliability engineering, it is known that electrical and mechanical equipment usually have more than one failure mode or cause. The mixed Weibull distribution is an appropriate distribution to use in modeling the lifetimes of the units that have more than one failure cause. However, due to the lack of a systematic statistical procedure for fitting an appropriate distribution to such a mixed data set, it has not been widely used. A mixed Weibull distribution represents a population that consists of several Weibull subpopulations. In this paper, a new approach is developed to estimate the mixed-Weibull distribution's parameters. At first, the population sample data are split into subpopulation data sets over the whole test duration by using the posterior belonging probability of each observation to each subpopulation. Then, with the new concepts of fracture failure and mean order number, the proposed approach combines the least-squares method with Bayes' theorem, takes advantage of the parameter estimation for single Weibull distribution to each derived subgroup data set, and estimates the parameters of each subpopulation. The proposed approach can also be applied for complete, censored, and grouped data samples. Its superiority is particularly significant when the sample size is relatively small and for the case in which the subpopulations are well mixed. A numerical example is given to compare the proposed method with the conventional plotting method of subpopulation separation. It turns out that the proposed method yields more accurate parameter estimates.","PeriodicalId":275301,"journal":{"name":"Annual Reliability and Maintainability Symposium. 1998 Proceedings. International Symposium on Product Quality and Integrity","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-01-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114309792","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
From reliability-prediction to a reliability-budget
Pub Date: 1998-01-19  DOI: 10.1109/RAMS.1998.653814
V. Loll
For many years, prediction according to MIL-HDBK-217 was a major activity in most projects. A prediction was required in the contract, but the customer seldom required the assumptions to be stated in the report or compliance with the assumptions to be verified. This led to the so-called "number game". Today it is a "hot issue" whether experience data can be used for anything, and whether a component has a meaningful hazard rate. This paper discusses which advantages a prediction can offer to a manufacturer and a customer. This leads to the conclusion that instead of a prediction one should talk about a reliability budget, which guides the project manager to the parts of the design where early analysis and tests should be made. The reliability budget is made using Weibull probability paper, so that nonconstant hazard rates can be included in the budget. This allows wear-out to be included in the analysis and the reliability budget to be compared with the reliability target for the system. Reliability prediction has also been criticized for blocking improved design techniques. The experience of the well-known Danish company Bang & Olufsen, which uses a modified prediction technique to encourage designers to improve their design technique, is described.
{"title":"From reliability-prediction to a reliability-budgetedddd","authors":"V. Loll","doi":"10.1109/RAMS.1998.653814","DOIUrl":"https://doi.org/10.1109/RAMS.1998.653814","url":null,"abstract":"For many years, prediction according to MIL-HDBK-217 was a major activity in most projects. A prediction was required in the contract, but the customer seldom required the assumptions to be stated in the report and compliance with the assumptions to be verified. This leads to the so called \"number game\". Today it is a \"hot issue\" if experience data can be used for anything, and whether a component does have a meaningful hazard rate. This paper discusses which advantages a prediction can offer to a manufacturer and a customer. This leads to the conclusion that instead of prediction one should talk about a reliability budget, which will guide the project manager to the part of the design where early analysis and tests should be made. This reliability budget is made using a Weibull probability paper, so that nonconstant hazard rate can be included in the budget. This allows the inclusion of wear-out in the analysis, allowing a comparison of the reliability budget with the reliability target for the system. Reliability prediction has also been criticized for blocking for improved design techniques. The experience of the well known Danish company Bang and Olufsen, using a modified prediction technique to encourage designers to improve their design technique, is described.","PeriodicalId":275301,"journal":{"name":"Annual Reliability and Maintainability Symposium. 1998 Proceedings. International Symposium on Product Quality and Integrity","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-01-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126474206","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Prediction of product failure rate due to event-related failure mechanisms
Pub Date: 1998-01-19  DOI: 10.1109/RAMS.1998.653802
D. Lin, T.L. Welsher
Individual events in the use environment, such as accidentally dropping a cellular phone or zapping it with human-body ESD (electrostatic discharge), are becoming more frequent as electronic products become more portable. Monte Carlo simulations of the stress distribution offered by the environment and of the product strength distribution are used to derive the infant mortality (early failure) curve. The slope of the fitted infant mortality curve indicates how far apart the two distributions are and how frequent the individual events are. Two new metrics for tracking infant mortality, SIM (severity of infant mortality) and D_5%, are proposed. The process of setting test-based reliability requirements to achieve a given field return goal is also illustrated.
{"title":"Prediction of product failure rate due to event-related failure mechanisms","authors":"D. Lin, T.L. Welsher","doi":"10.1109/RAMS.1998.653802","DOIUrl":"https://doi.org/10.1109/RAMS.1998.653802","url":null,"abstract":"Individual events in the use environment such as accidentally dropping a cellular phone or zapping it with human body ESD (electrostatic discharge) are getting more frequent as electronic products are becoming more portable. Monte Carlo simulations of the stress distribution offered by the environment and the product strength distribution are used to derive the infant mortality (early failure) curve. Fitting the slope of the infant mortality curve is an indicator of how far apart the two distributions are and the frequency of individual events. Two new metrics, SIM (severity of infant mortality) and D/sub 5%/, to track infant mortality are proposed. The process to set test-based reliability requirements for achieving a given field return goal is also illustrated.","PeriodicalId":275301,"journal":{"name":"Annual Reliability and Maintainability Symposium. 1998 Proceedings. International Symposium on Product Quality and Integrity","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-01-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125734752","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Recovery blocks in real-time distributed systems
Pub Date: 1998-01-19  DOI: 10.1109/RAMS.1998.653703
Dong Nguyen, Dar-Biau Liu
This paper discusses the concept of recovery blocks as a dynamic redundancy approach to software fault tolerance. The discussion focuses on the distributed recovery block (DRB) scheme, which can be thought of as a means of integrating hardware and software fault tolerance in a single structure. The DRB approach, which combines distributed processing and recovery block concepts, is capable of effecting forward recovery while handling both hardware and software faults in a uniform manner. The DRB was developed for applications such as command and control, in which data is collected by interface processors and distributed over a network, and in which data from one pair of processors is output to another pair of processors. The extended distributed recovery block (EDRB) is then discussed as a modification of the original DRB for real-time process control applications. The implementation of the EDRB is also presented to acquaint the reader with the implementation issues that must be faced in the development of a fault-tolerant software architecture for a distributed system.
{"title":"Recovery blocks in real-time distributed systems","authors":"Dong Nguyen, Irvine, Dar-Biau Liu","doi":"10.1109/RAMS.1998.653703","DOIUrl":"https://doi.org/10.1109/RAMS.1998.653703","url":null,"abstract":"This paper discusses the concept of recovery blocks as a dynamic redundancy approach to software fault tolerance. The discussion focuses on the distributed recovery block (DRB) scheme which can be thought of as a means of integrating hardware and software fault tolerance in a single structure. The DRB approach, which combines distributed processing and recovery block concepts, is capable of effecting forward recovery while handling both hardware and software faults in a uniform manner. The DRB was developed for applications such as command and control in which data was collected by interface processors and distributed over a network, and in which data from one pair of processors was output to another pair of processors. The extended distributed recovery blocks (EDRB) is then discussed as a modified scheme of the original DRB for real-time process control applications. The implementation of the EDRB is also presented to acquaint the reader with the implementation issue that must be faced in the development of a fault-tolerant software architecture for a distributed system.","PeriodicalId":275301,"journal":{"name":"Annual Reliability and Maintainability Symposium. 1998 Proceedings. International Symposium on Product Quality and Integrity","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-01-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130290688","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ways to improve the analysis of step-stress testing
Pub Date: 1998-01-19  DOI: 10.1109/RAMS.1998.653804
J. McLinn
Step-stress testing has a number of worthwhile applications in projecting the life of components and, occasionally, systems. These include the common situations in which only a small number of systems are available for test, extremely long tests or specialized test equipment would be required, environmental chamber capability and/or test fixtures are limited, and very expensive support equipment limits the test. Step-stress testing has had limited applicability in the past. This has been due to improperly described degradation or cumulative fatigue, poor control of the test samples, and difficulty with the analysis of the failure data. A tight set of step-stress ground rules is proposed in this paper to solve or mitigate these and other common accelerated-test problems. This paper also presents methods to improve the analysis of the step-stress test. While these methods are approximate, they lend themselves to analysis on a computer or by standard hand techniques. Additionally, the ground rules presented suggest that wider limits may be taken for running a step-stress test than has been suggested in the past. These ground rules aid the analysis by helping limit the range of results. The step intervals need not be of the same length, nor the stress steps uniform in size.
{"title":"Ways to improve the analysis of step-stress testing","authors":"J. McLinn","doi":"10.1109/RAMS.1998.653804","DOIUrl":"https://doi.org/10.1109/RAMS.1998.653804","url":null,"abstract":"Step-stress testing has a number of worthwhile applications for the analysis of the projected life for components and occasionally systems. These include the common situations of testing when only a small number of systems are available, when extremely long or specialized test equipment are required, when limited environmental chamber capability and/or test fixtures are involved, and lastly when very expensive support equipment is a test limit. Step-stress testing has had limited applicability in the past. This has been due to improperly described degradation or accumulative fatigue, poor control of the test samples and difficulty with the analysis of the failure data. A tight series of step-stress ground rules are proposed in this paper to solve or mitigate these and other common accelerated test problems. This paper also presents methods to improve the analysis of the step-stress test. While these methods are approximate, they lend themselves to analysis on a computer by standard hand techniques. Additionally, the ground rules presented suggest that wider limits may be taken for running a step stress test than has been suggested in the past. These ground rules aid the analysis by helping limit the range of results. The step intervals need not be of the same length nor stress steps uniform in size.","PeriodicalId":275301,"journal":{"name":"Annual Reliability and Maintainability Symposium. 1998 Proceedings. International Symposium on Product Quality and Integrity","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-01-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127861104","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sample implementation of the Littlewood holistic model for assessing software quality, safety and reliability
Pub Date: 1998-01-19  DOI: 10.1109/RAMS.1998.653698
D. Herrmann
Software quality, safety and reliability metrics should be collected, integrated, and analyzed throughout the development lifecycle so that corrective and preventive action can be taken in a timely and cost-effective manner. It is too late to wait until the testing phase to collect and assess software quality information, particularly for mission-critical systems. It is inadequate, and can be misleading, to use only the results obtained from testing to make a software safety or reliability assessment. To remedy this situation, a holistic model which captures, integrates and analyzes product, process, and people/resource (P³R) metrics, as recommended by B. Littlewood (1993), is needed. This paper defines one such possible implementation.
{"title":"Sample implementation of the Littlewood holistic model for assessing software quality, safety and reliability","authors":"D. Herrmann","doi":"10.1109/RAMS.1998.653698","DOIUrl":"https://doi.org/10.1109/RAMS.1998.653698","url":null,"abstract":"Software quality, safety and reliability metrics should be collected, integrated, and analyzed throughout the development lifecycle so that corrective and preventive action can be taken in a timely and cost effective manner. It is too late to wait until the testing phase to collect and assess software quality information, particularly for mission critical systems. It is inadequate and can be misleading to only use the results obtained from testing to make a software safety or reliability assessment. To remedy this situation a holistic model which captures, integrates and analyzes product, process, and people/resource (P/sup 3/R) metrics, as recommended by B. Littlewood (1993), is needed. This paper defines one such possible implementation.","PeriodicalId":275301,"journal":{"name":"Annual Reliability and Maintainability Symposium. 1998 Proceedings. International Symposium on Product Quality and Integrity","volume":"111 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-01-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117197027","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Bayes reliability demonstration test plan for series-systems with binomial subsystem data
Pub Date: 1998-01-19  DOI: 10.1109/RAMS.1998.653780
L. Ten, M. Xie
One reason that the Bayesian approach to reliability demonstration has not gained popularity in industry is the difficulty of establishing the prior. The problem becomes more complicated when only subsystem data are available; this case has received little attention in the existing literature, and this paper attempts to address it. A method is proposed to derive a Bayesian reliability demonstration test plan for series systems with binomial subsystem data. The method uses Mann's approximately optimum lower confidence bound model to derive the system prior from the binomial subsystem data. The system Bayesian reliability demonstration test plan can then be derived using existing methods for meeting posterior confidence requirements. The proposed method is easy to apply, and no complicated computation is involved in deriving the system prior distribution. It uses objective subsystem test data; no subjective judgement is required. This method is most beneficial for systems that already have substantial subsystem test data before the reliability demonstration.
{"title":"Bayes reliability demonstration test plan for series-systems with binomial subsystem data","authors":"L. Ten, M. Xie","doi":"10.1109/RAMS.1998.653780","DOIUrl":"https://doi.org/10.1109/RAMS.1998.653780","url":null,"abstract":"One reason that the Bayesian approach to reliability demonstration has not gained popularity in industry is the difficulty in establishing the prior. The problem becomes more complicated when only subsystem data are available. It has received little attention in the existing literature and this paper makes an attempt to do that. A method is proposed to derive the Bayesian reliability demonstration test plan for series systems with binomial subsystem data. The method makes use of Mann's approximately optimum lower confidence bound model to derive the system prior based on binomial subsystem data. The system Bayesian reliability demonstration test plan can then be derived using existing methods for meeting posterior confidence requirements. The proposed method is easy to apply and no complicated computation is involved in deriving the system prior distribution. It uses objective subsystem test data. No subjective judgement is required. This method is most beneficial for systems that already have substantial subsystem test data before the reliability demonstration.","PeriodicalId":275301,"journal":{"name":"Annual Reliability and Maintainability Symposium. 1998 Proceedings. International Symposium on Product Quality and Integrity","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-01-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126140802","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}