Pub Date: 1996-01-22 | DOI: 10.1109/RAMS.1996.500675
Siang-Ying Choy, J. English, T. L. Landers, Li Yan
This paper defines the functional requirements of a computerized decision support system (DSS) intended to serve as a working tool assisting the reliability engineer in repair/replacement management of material handling equipment. It incorporates the knowledge required to build analytical models of complex material handling systems using known techniques of system modeling. The system can be used to guide the data analysis and statistical modeling for both repairable and nonrepairable systems. This project aims at bridging the gap between reliability theory and practical application in the field of reliability engineering. A prototype of this computerized system is being developed (to be tested at the research sponsors' locations). The nonhomogeneous Poisson process (NHPP), renewal process, Prentice, Williams and Peterson (PWP) model, and proportional hazards models are implemented in this project. The complete knowledge base addresses both parametric and nonparametric methods of reliability estimation. An additional focus of this project is research on the left-truncated power-law intensity model for cases where the early lifetime data are missing. The software is being developed to support each of these models. Case studies are conducted based upon the failure data of industrial forklift trucks, and these analyses are being used to verify the design of the DSS. The presentation covers the organization of the DSS and one of the case studies.
Title: Collective approach for modeling complex system failures
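The power-law intensity model mentioned above has closed-form maximum-likelihood estimators for time-truncated observation. A minimal sketch in Python on hypothetical failure ages (these are the standard time-truncated power-law NHPP estimators, not the authors' left-truncated variant):

```python
import math

# Hypothetical cumulative failure ages (hours) of one repairable unit,
# observed over a fixed window of T hours (time-truncated data).
t = [55, 166, 341, 488, 567, 731, 805, 844, 891, 934]
T = 1000.0

# Power-law NHPP intensity u(x) = lam * beta * x**(beta - 1).
# Closed-form MLEs for time-truncated data:
n = len(t)
beta = n / sum(math.log(T / ti) for ti in t)  # shape; beta > 1 suggests deterioration
lam = n / T ** beta                           # scale
rocof = lam * beta * T ** (beta - 1)          # failure intensity at end of observation
```

With failures clustering late in the window, as here, the fitted shape parameter exceeds 1, indicating a deteriorating system.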
Pub Date: 1996-01-22 | DOI: 10.1109/RAMS.1996.500676
G. Johnston
Many practitioners of component and system reliability are not aware that powerful statistical tools for the analysis of reliability data have been made practical by the availability of inexpensive desktop computers. Software and computational power are available to apply computationally intensive statistical and graphical techniques to reliability data analysis problems. This benefits the industrial statistician or reliability engineer by allowing the use of versatile and accurate methods that apply to the many different types of data encountered in reliability data analysis. In this paper we apply some of the most useful statistical and graphical techniques to examples of life data, accelerated test data, and repairable system data using new software available in the SAS system. The trend of applying computationally intensive techniques to reliability data analysis will undoubtedly continue as more workers recognize the need for creative software to address problems in reliability data analysis.
Title: Computational methods for reliability data analysis
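The abstract does not reproduce the SAS code, but the kind of graphical life-data technique it refers to can be illustrated with a probability-plotting (median-rank regression) Weibull fit, here sketched in Python on fabricated complete failure data:

```python
import math

# Fabricated complete life-test data: hours to failure for 8 units.
times = sorted([420, 610, 805, 1100, 1300, 1650, 2010, 2400])
n = len(times)

# Plotting positions via Bernard's median-rank approximation, then the
# linearized Weibull CDF: ln(-ln(1-F)) = beta*ln(t) - beta*ln(eta).
xs, ys = [], []
for i, t in enumerate(times, start=1):
    f = (i - 0.3) / (n + 0.4)
    xs.append(math.log(t))
    ys.append(math.log(-math.log(1.0 - f)))

# Least-squares slope gives the shape beta; the intercept gives the scale eta.
mx, my = sum(xs) / n, sum(ys) / n
beta = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
eta = math.exp(mx - my / beta)
```

On a Weibull probability plot the same computation appears as fitting a straight line through the plotted points; beta is its slope and eta is the time at which roughly 63.2% of units have failed.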
Pub Date: 1996-01-22 | DOI: 10.1109/RAMS.1996.500664
Marc Bouissou
Binary decision diagrams (BDDs) have made a noticeable entry into the RAMS field. This kind of representation for Boolean functions makes possible the assessment of complex fault-trees, both qualitatively (minimal cutset search) and quantitatively (exact calculation of the top event probability). Any Boolean function, and in particular any fault-tree, whether coherent or not, can be represented by a BDD. The BDD is a canonical representation of the function once a variable (i.e., in the fault-tree case, basic event) ordering has been chosen. Tools based on BDDs, like METAPRIME or ARALIA, can in some cases give more accurate results than conventional tools while running 1000 times faster. EDF has investigated this technology and tested METAPRIME, ARALIA, and other BDD-based tools in the framework of cooperation with the BULL company and with the University of Bordeaux. These tests have demonstrated that the size of the BDD, which has to be built in full before any kind of assessment can begin, is dramatically sensitive to the ordering chosen for the variables: for a given fault-tree, this size may vary by several orders of magnitude, which can lead to excessive needs in both memory and CPU time. Since finding an optimal ordering is intractable for real applications, many heuristics have been proposed to find acceptable orderings at low cost in terms of computing requirements.
Title: An ordering heuristic for building binary decision diagrams from fault-trees
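The ordering sensitivity described above is easy to reproduce on a toy function. The following sketch (not the paper's heuristic) counts reduced ordered BDD nodes for the classic function x1·y1 + x2·y2 + x3·y3 under two orderings; the truth-table construction is exponential and suitable only for tiny examples:

```python
def bdd_nodes(f, order):
    """Count internal nodes of the reduced ordered BDD of f under 'order'.
    f maps a dict {variable: bool} to bool.  Exponential in len(order):
    toy sizes only."""
    n = len(order)
    # Truth table indexed so that bit i of the row index is order[i]'s value.
    tt = tuple(
        f({v: bool((idx >> (n - 1 - i)) & 1) for i, v in enumerate(order)})
        for idx in range(2 ** n)
    )
    unique = {}  # hash-consing table: (level width, low, high) -> node

    def build(table):
        if all(table):
            return 1
        if not any(table):
            return 0
        half = len(table) // 2
        lo, hi = build(table[:half]), build(table[half:])
        if lo == hi:
            return lo  # redundant test on this variable: skip the node
        return unique.setdefault((len(table), lo, hi), (len(table), lo, hi))

    build(tt)
    return len(unique)

# Fault-tree top event: (x1 AND y1) OR (x2 AND y2) OR (x3 AND y3)
f = lambda a: (a['x1'] and a['y1']) or (a['x2'] and a['y2']) or (a['x3'] and a['y3'])

good = bdd_nodes(f, ['x1', 'y1', 'x2', 'y2', 'x3', 'y3'])  # interleaved order
bad  = bdd_nodes(f, ['x1', 'x2', 'x3', 'y1', 'y2', 'y3'])  # separated order
```

The interleaved ordering yields a BDD of 6 internal nodes, the separated one 14; for the general n-pair version of this function the gap grows from 2n to roughly 2^(n+1), which is exactly the orders-of-magnitude effect the abstract reports.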
Pub Date: 1996-01-22 | DOI: 10.1109/RAMS.1996.500653
J. Fragola
The authors describe how NASA appears to be undergoing a paradigm shift in its approach to Space Shuttle risk management. At least in some quarters, there appears to be a recognition that advances in the state of the art have now made quantitative risk assessments powerful risk management tools, especially for programs such as the Shuttle, with its ever-increasing flight and test history data sets and its ever-shrinking operational budget.
Title: Space Shuttle program risk management
Pub Date: 1996-01-22 | DOI: 10.1109/RAMS.1996.500668
J. A. McLinn
A new medical blood analyzer (laboratory instrument) was designed, developed and readied for production. Near the end of this series of activities, ten prototype systems exhibited lower-than-desired reliability in a development test. It was unclear to the designers why this had occurred, as "the best design criteria" had been employed. A short, controlled, full life test was proposed as a means to quantify the probable types and frequency of failure. This paper details the reliability findings at this point as well as methods for improvement of the typical product. Some of the technical choices and tradeoffs for reliability, maintainability, field performance, costs and quality, as well as engineering decisions associated with reliability and field support, are identified. Close attention is paid to the identification of a small number of important reliability measures. The reliability, maintainability, support and improvement data should prove highly instructive for any commercial or consumer company wishing to justify starting or continuing the reliability improvement process. Benchmark data are presented to aid others in establishing progress points during development. The data represent a blend of several similar systems.
Title: Reliability development and improvement of a medical instrument
Pub Date: 1996-01-22 | DOI: 10.1109/RAMS.1996.500640
C. Price
Design FMEA of electrical systems is a costly and labour-intensive process. Ideally it would be done when the electrical system is first designed, and repeated whenever any change is made to the design. Because of the cost, this has not been possible in the past. This paper describes how an existing tool for automating electrical design failure mode and effects analysis (FMEA) can be augmented to make incremental design FMEA much less of a burden for the engineer. The tool is able to generate the effects for each failure mode and to assign significance values to the effects. The first time it is run on a design, the engineer still has quite a lot of work to do, examining the results and deciding what actions need to be taken because of the FMEA. When a change is made to the circuit, the engineer runs the FMEA tool again and receives a new report. Because of the uniformity of the reports provided by the FMEA tool, it has proved possible to write software which picks out the failure effects that have changed from the previous analysis and reports only those results to the engineer. This greatly reduces the effort of examining the repercussions of an incremental FMEA, and makes it feasible to perform an incremental FMEA every time the design is amended.
Title: Effortless incremental design FMEA
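The report-diffing idea lends itself to a small sketch. Assuming, hypothetically, that each uniform FMEA report can be flattened into a mapping from failure mode to an (effect, significance) pair, the incremental comparison reduces to a dictionary diff:

```python
def fmea_delta(previous, current):
    """Return the failure modes whose entries changed or appeared since the
    previous run, plus those that vanished.  Reports are hypothetical
    flattened dicts: failure mode -> (effect description, significance)."""
    changed = {m: v for m, v in current.items() if previous.get(m) != v}
    removed = {m: v for m, v in previous.items() if m not in current}
    return changed, removed

before = {'relay stuck open': ('lamp never lights', 7),
          'wire W3 short to ground': ('fuse F1 blows', 5)}
after = {'relay stuck open': ('lamp never lights', 7),                  # unchanged
         'wire W3 short to ground': ('fuse F1 blows, ECU resets', 8),   # changed
         'switch contact welded': ('lamp always on', 6)}                # new
changed, removed = fmea_delta(before, after)
```

Only the changed and new entries reach the engineer; the unchanged relay failure mode is filtered out, which is the effort reduction the paper describes.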
Pub Date: 1996-01-22 | DOI: 10.1109/RAMS.1996.500679
B. Mitchell, R. Murry
The Strategic Petroleum Reserve (SPR) is a multi-mission project required by law to maintain a prescribed degree of readiness and mandated performance criteria. The prediction of operational availability is essential to determine operational readiness to satisfy mission requirements. This is accomplished through the use of availability models built on a reliability block diagram (RBD) of mission-critical components. The RBD model calculations incorporate sparing criteria and components using a multi-state model of the operation. Individual component data include capacity, mean time between failures (MTBF), and mean down time (MDT), assuming repairable components and instantaneous switching. An accurate site model addresses all of these concerns and provides a good prediction of operational availability. An example of a system without a spare and the same system with a spare is presented to illustrate one method of incorporating sparing into the prediction of operational availability.
Title: Predicting operational availability for systems with redundant, repairable components and multiple sparing levels
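The arithmetic behind such an RBD availability model is straightforward to sketch. With hypothetical MTBF/MDT figures (not from the paper), steady-state availability per block is MTBF/(MTBF + MDT), blocks combine by the usual series/parallel rules, and a spare enters as a parallel branch under the paper's instantaneous-switching assumption:

```python
def avail(mtbf, mdt):
    # Steady-state availability of a repairable block.
    return mtbf / (mtbf + mdt)

def series(*blocks):
    # System is up only if every block is up.
    p = 1.0
    for a in blocks:
        p *= a
    return p

def parallel(*blocks):
    # System is up if at least one block is up (instantaneous switching).
    q = 1.0
    for a in blocks:
        q *= 1.0 - a
    return 1.0 - q

pump = avail(mtbf=2000.0, mdt=36.0)    # hypothetical component figures
valve = avail(mtbf=8000.0, mdt=12.0)

no_spare = series(pump, valve)                     # single pump in series with valve
with_spare = series(parallel(pump, pump), valve)   # one installed spare pump
```

Comparing `no_spare` against `with_spare` reproduces, in miniature, the paper's example of quantifying how much a spare improves predicted operational availability.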
Pub Date: 1996-01-22 | DOI: 10.1109/RAMS.1996.500635
C. Benski
This paper is an overview of current activities within the international standards community to harmonize efforts in the dependability field. Cooperation and coordination among different national and international bodies dealing with standards is essential to achieve coherency within a horizontal standards activity such as in dependability standards. The author presents some recent results of this cooperation, in particular, the activities of the QDS (Quality, Dependability and Statistics) committee of ISO and IEC. He also shows specific instances where coordination among international standards bodies has been lacking and indicates potential dangers and pitfalls arising from this, specifically, in terms of contractual conflicts.
Title: Dependability standards: an international perspective
Pub Date: 1996-01-22 | DOI: 10.1109/RAMS.1996.500678
K.E. Spezzaferro
As budgets decrease, it is imperative to select maintenance inspection interval lengths that minimize costs without risking safety or operational effectiveness. However, the data required are not always available or conducive to standard analytic techniques. This paper discusses the application of logistic regression to existing maintenance inspection data to establish inspection intervals. Logistic regression handles binary (go/no-go) response variables, which do not lend themselves to analysis with traditional methods. This paper presents the methodology along with pertinent results.
Title: Applying logistic regression to maintenance data to establish inspection intervals
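The paper's own data are not reproduced in the abstract, but the technique can be sketched on fabricated go/no-go records: fit P(defect found) against inspection interval length, then read off the longest interval that keeps that probability below a chosen limit. A minimal dependency-free version using gradient descent on the log-loss:

```python
import math

# Fabricated inspection records: (interval length in hours, defect found?).
data = [(100, 0), (200, 0), (300, 0), (400, 1), (500, 0),
        (600, 1), (700, 1), (800, 1), (900, 1), (1000, 1)]

def p_defect(x, b0, b1):
    # Logistic model; interval scaled to keep the optimization well-behaved.
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * x / 1000.0)))

# Fit coefficients by plain gradient descent on the logistic log-loss.
b0 = b1 = 0.0
for _ in range(20000):
    g0 = g1 = 0.0
    for x, y in data:
        e = p_defect(x, b0, b1) - y
        g0 += e
        g1 += e * x / 1000.0
    b0 -= 0.5 * g0 / len(data)
    b1 -= 0.5 * g1 / len(data)

# Longest interval (on a 50 h grid) keeping predicted defect probability < 10%.
limit = max((x for x in range(50, 1001, 50) if p_defect(x, b0, b1) < 0.10),
            default=0)
```

The 10% threshold and the grid are illustrative assumptions; in practice the acceptable probability would come from the safety and effectiveness requirements the paper mentions.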
Pub Date: 1996-01-22 | DOI: 10.1109/RAMS.1996.500661
Sut-Mui Tang
This paper describes a new methodology for selecting effective burn-in strategies for integrated circuits (ICs) in automotive applications. The method analyzes failure mechanisms for different IC technologies and utilizes family IC data to determine appropriate burn-in conditions for new ICs. The burn-in effectiveness for metal-oxide-semiconductor (MOS) and bipolar technologies is discussed. Burn-in data is presented to demonstrate that burn-in is no longer a cost effective screening process for bipolar ICs and some MOS ICs, but it is still needed for MOS ICs with large die sizes and complex processing technologies. Data also reveals that burn-in is primarily useful for detecting wafer processing defects rather than packaging defects. To select family ICs, a method based on IC attributes is described. Practical guidelines on how to use family IC data and acceleration factors to reduce burn-in time are also explained.
Title: New burn-in methodology based on IC attributes, family IC burn-in data, and failure mechanism analysis
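Temperature acceleration factors of the kind such burn-in guidelines rely on are commonly computed from the Arrhenius model; a sketch with an assumed activation energy and temperatures (the values are illustrative, not taken from the paper):

```python
import math

BOLTZMANN_EV = 8.617e-5  # Boltzmann constant, eV/K

def arrhenius_af(ea_ev, use_c, stress_c):
    """Arrhenius acceleration factor between use and burn-in temperatures."""
    t_use, t_stress = use_c + 273.15, stress_c + 273.15
    return math.exp((ea_ev / BOLTZMANN_EV) * (1.0 / t_use - 1.0 / t_stress))

# Assumed: Ea = 0.7 eV, 55 degC use environment, 125 degC burn-in oven.
af = arrhenius_af(ea_ev=0.7, use_c=55.0, stress_c=125.0)
equivalent_field_hours = 48.0 * af  # field hours "consumed" by a 48 h burn-in
```

With these assumed values the factor comes out in the tens, which is why even a short burn-in can screen out infant-mortality wafer-processing defects; a per-family activation energy, as the paper proposes, would replace the assumed 0.7 eV.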