A relation model of yield and reliability for the gate oxide failures
Taeho Kim, W. Kuo, W. Chien
Pub Date: 1998-01-19 | DOI: 10.1109/RAMS.1998.653815
Published in: Annual Reliability and Maintainability Symposium, 1998 Proceedings. International Symposium on Product Quality and Integrity.
Yield and reliability are two important factors affecting the profitability of semiconductor manufacturing. Using the relationship between yield and reliability, which is based on defect reliability physics, and combining it with fault coverage, the authors develop a model to predict the gate oxide reliability of integrated circuits (ICs). The model explains well some previous experimental results verifying the yield-reliability relationship, and it helps identify extrinsic failure mechanisms and electrical degradation caused by defects.
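The abstract does not give the authors' actual equations, but the standard defect-based argument it builds on can be sketched: with a Poisson yield model, yield is the probability of zero "killer" defects, and if latent (reliability) defects occur at a density that is a fraction `gamma` of the killer-defect density, the same argument gives reliability as a power of yield. The numbers below are purely illustrative.

```python
import math

def poisson_yield(area_cm2, defect_density):
    """Poisson yield model: Y = exp(-A * D0), the probability that a
    die of critical area A (cm^2) has zero killer defects at density D0."""
    return math.exp(-area_cm2 * defect_density)

def reliability_from_yield(y, gamma):
    """If latent (reliability) defects arrive at gamma times the killer
    defect density, P(zero latent defects) = exp(-gamma * A * D0) = Y**gamma."""
    return y ** gamma

# Illustrative values, not from the paper:
Y = poisson_yield(area_cm2=0.5, defect_density=0.4)   # exp(-0.2)
R = reliability_from_yield(Y, gamma=0.01)
print(f"yield={Y:.3f}, defect-limited reliability={R:.5f}")
```

The key qualitative prediction, which the paper's experiments address, is that lower-yielding material also ships with more latent defects, so yield and early-life reliability move together.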
Impact of maintenance staffing on availability of the US air traffic control system
M. Hecht, J. Handal, L. Czekalski, A. Rosin
Pub Date: 1998-01-19 | DOI: 10.1109/RAMS.1998.653669
This paper describes a finite queuing model for assessing the impact of staffing on outage times and availability in the US national network of air traffic control equipment. Because of the wide geographic distribution of FAA facilities and equipment, maintenance is provided through a national network of cost centers. Each center has a limited number of technicians ("servers") responsible for scheduled maintenance and repair of the equipment assigned to it. When a piece of equipment requires service and a qualified technician is available, the outage time is simply the repair time. If equipment fails while all qualified technicians are busy with other repairs, there is an additional waiting time until one is free. The model determines average outage times as a function of the number of technicians assigned to a cost center, equipment failure rates, and the number of equipment items the technicians must support.
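The paper's exact model is not reproduced in the abstract, but the classic finite-source ("machine repair") M/M/c queue captures the mechanism it describes: outage time equals repair time when a technician is free, plus a queueing delay when all technicians are busy. The sketch below computes mean outage and mean wait from the standard birth-death balance equations; all parameter values are illustrative.

```python
from math import comb, factorial

def machine_repair(N, c, lam, mu):
    """Finite-source M/M/c queue. N equipment items, c technicians;
    each operating item fails at rate lam, each repair completes at
    rate mu. Returns (mean outage time per failure, mean wait for a
    technician). State n = number of items down."""
    rho = lam / mu
    weights = []
    for n in range(N + 1):
        if n <= c:
            w = comb(N, n) * rho ** n
        else:
            # beyond c items down, only c repairs proceed in parallel
            w = comb(N, n) * (factorial(n) / (factorial(c) * c ** (n - c))) * rho ** n
        weights.append(w)
    p0 = 1.0 / sum(weights)
    L = p0 * sum(n * w for n, w in enumerate(weights))  # mean items down
    lam_eff = lam * (N - L)                             # failure throughput
    W = L / lam_eff                                     # outage time (Little's law)
    return W, W - 1.0 / mu                              # wait = outage - repair

outage, wait = machine_repair(N=10, c=2, lam=0.01, mu=0.5)
```

Sanity check on the design: when `c >= N` no failure ever waits, so the outage time collapses to the bare repair time `1/mu`, exactly the "outage time is simply the repair time" case in the abstract.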
Survival analysis and maintenance policies for a series system with highly censored data
D. Reineke, E. Pohl, W. Murdock
Pub Date: 1998-01-19 | DOI: 10.1109/RAMS.1998.653719
This paper considers the problem of estimating the survival function from a large sample subject to high levels of random right censoring. The system under study is a series arrangement of four functional subsystems, each itself a series collection of independent components; the system has no redundancy. The study simulates a series arrangement of four unique components and compares the Kaplan-Meier estimator (KME), the piecewise exponential estimator (PEXE), and the maximum likelihood estimator (MLE) in estimating the survivor functions of the system and of the individual components under high levels of random censorship. Monte Carlo analysis is used to compare total-time-on-test plots and optimal age-replacement times determined with the KME and PEXE methods. The study extends the work of Klefsjo and Westberg (1994) by considering the estimation of survivor functions and optimal age-replacement periods at censoring levels up to 90%. Under such heavy censoring, both the survivor curve and the optimal replacement time are generally, and sometimes severely, underestimated at the component level but not necessarily at the system level. Further studies will examine the trade-offs between using system-level and component-level data for maintenance decisions with highly censored samples.
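For readers unfamiliar with the first estimator compared above, the Kaplan-Meier product-limit estimator handles right censoring by multiplying, at each distinct failure time, the conditional survival fraction among units still at risk; censored units leave the risk set without forcing the curve down. A minimal self-contained implementation:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier product-limit estimator.
    times: observed times; events: 1 = failure, 0 = right-censored.
    Returns [(t, S(t))] at each distinct failure time."""
    pairs = sorted(zip(times, events))
    S = 1.0
    curve = []
    for t in sorted({t for t, _ in pairs}):
        at_risk = sum(1 for tt, _ in pairs if tt >= t)      # still under observation
        deaths = sum(1 for tt, e in pairs if tt == t and e == 1)
        if deaths:                                          # censorings don't drop S
            S *= 1 - deaths / at_risk
            curve.append((t, S))
    return curve

# Four units: failures at t=1, 2, 4; one censored at t=3.
curve = kaplan_meier([1, 2, 3, 4], [1, 1, 0, 1])
```

The paper's point about heavy censoring is visible even here: the censored unit at t=3 shrinks the risk set at t=4 without contributing a failure, which is what biases component-level curves when censoring reaches 90%.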
Integrated design method for probabilistic design
M.T. Kowal, A. Dey, R. Tryon
Pub Date: 1998-01-19 | DOI: 10.1109/RAMS.1998.653812
This paper presents a rational design approach known as the integrated design method (IDM). By embedding cost relationships within a probabilistic methodology, IDM gives engineers a new tool to objectively assess product cost, performance, and reliability. Its benefits include lower product cost, superior product performance, reduced product development time, and reduced warranty and liability costs. Harnessing the full potential of IDM requires an easy-to-use tool capable of performing all required analysis quickly, accurately, and with minimal user interaction. ProFORM is computational software that integrates all elements of the probabilistic analysis into a comprehensive, easily understood package. Unlike other packages, ProFORM integrates seamlessly with existing design tools. A numerical example at the end of the paper demonstrates the advantages of IDM over conventional design methods.
Tailoring ESS strategies for effectiveness and efficiency
G. Epstein
Pub Date: 1998-01-19 | DOI: 10.1109/RAMS.1998.653553
The need to reduce production costs through process streamlining is paramount in today's highly competitive environment. For environmental stress screening (ESS), this means the screen must be justified by the overall value it provides to the customer. This paper therefore addresses the problem of designing an ESS test strategy that is effective yet makes efficient use of test resources. The experience described here shows that an ESS process can be streamlined while preserving its effectiveness. In this case study, the resulting strategy incorporated a two-level ESS program that integrates the screening of spare modules with that of deliverable line replaceable units (LRUs). The new strategy also tailors environmental stress and functional testing to the operational characteristics of the equipment. The result is a streamlined protocol that shortened ESS in-process time and reduced the demand on costly test resources.
Conditions of environmental accelerated testing
L. Klyatis
Pub Date: 1998-01-19 | DOI: 10.1109/RAMS.1998.653806
Strategic and tactical bases are developed for the conditions of environmental accelerated testing of a product using physical simulation of life processes. These conditions permit rapid attainment of accurate information for reliability evaluation and prediction, technological development, cost effectiveness, and competitive marketing of the product.
Reliability in product design-specification of dependability requirements
S. Virtanen
Pub Date: 1998-01-19 | DOI: 10.1109/RAMS.1998.653596
This paper presents a systematic approach to specifying dependability requirements for a product or system. Vague requirements and specifications cause the most waste in the design process and lengthen product design time. With the method presented here, quantitative dependability requirements that guide product design and development can be specified. The specification is based on customer requirements and the design team's knowledge, and the method applies both to tailor-made business-to-business products and to ordinary consumer goods. Dependability requirements can be allocated to functions, systems, mechanisms, and parts as the design work proceeds and design concepts become known. Allocation is based on each object's technical complexity and the importance assigned to it by customers. The method also demonstrates the effect of customer-set dependability requirements on the known technical solution of a product. This connection is important for avoiding promises that cannot be achieved, or whose achievement would be too expensive.
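The abstract does not spell out its allocation rule, but the weighted apportionment it gestures at (shares driven by technical complexity and customer importance) is conventionally done ARINC-style for a series system: each subsystem receives a slice of the system failure-rate budget proportional to its weight. A minimal sketch, with hypothetical subsystem names and weights:

```python
def allocate_failure_rates(system_target, weights):
    """ARINC-style apportionment for a series system: subsystem i gets
    lambda_i = system_target * w_i / sum(w), so the allocated rates
    sum back to the system failure-rate budget."""
    total = sum(weights.values())
    return {name: system_target * w / total for name, w in weights.items()}

# Illustrative budget (failures/hour) and complexity/importance weights:
alloc = allocate_failure_rates(
    system_target=1e-4,
    weights={"drive": 5, "control": 3, "frame": 1, "sensors": 1},
)
```

Because failure rates of a series system add, checking that the allocated rates sum to the budget is the natural consistency test for any such scheme.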
Establishing ASIC fault-coverage guidelines for high-reliability systems
W. Willing, A. Helland
Pub Date: 1998-01-19 | DOI: 10.1109/RAMS.1998.653807
Electronic systems are being designed with increasing levels of digital logic integration, quite often in the form of digital application-specific integrated circuits (ASICs). The level of integration in these devices (10,000 to more than 100,000 primitive logic elements such as gates and flip-flops) makes it difficult for design engineers to develop a comprehensive set of test vectors verifying that every element within the ASIC operates correctly. The percentage of logic elements (gates, flip-flops, etc.) exercised by the test vectors is known as fault coverage (FC). Although 100% fault coverage is the desired goal, the complexity of the ASICs often precludes reaching it. The hazards of insufficient fault coverage are magnified in complex systems with many ASICs: if an untested defective logic element is exercised in any one ASIC, a system failure occurs. This paper presents a mathematical model for developing digital ASIC fault-coverage guidelines for complex electronic systems. The model is based on established probabilistic relationships among integrated circuit fabrication yield, fault coverage, and the resulting device defect level, combined with an estimated probability that untested logic elements will be exercised in use. The results can be used to allocate the ASIC fault-coverage requirements necessary to achieve high system mission success rates.
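The "established probabilistic relationship" between yield, fault coverage, and defect level that this line of work builds on is usually the Williams-Brown model: of the parts that pass a test with coverage T, the fraction still defective is DL = 1 - Y^(1-T). A one-function sketch (the paper's full model adds the probability that an untested element is exercised in use, which is not reproduced here):

```python
def defect_level(process_yield, fault_coverage):
    """Williams-Brown model: fraction of shipped (test-passing) parts
    that are nonetheless defective, given process yield Y and test
    fault coverage T:  DL = 1 - Y ** (1 - T)."""
    return 1.0 - process_yield ** (1.0 - fault_coverage)

# At Y = 0.5, raising coverage from 90% to 99% cuts the escape rate:
dl_90 = defect_level(0.5, 0.90)
dl_99 = defect_level(0.5, 0.99)
```

Note the two limits: perfect coverage (T = 1) ships zero defective parts regardless of yield, while no test at all (T = 0) ships defectives at rate 1 - Y.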
Decision-making guidelines for the use of experience and generic data
E. Collins, E. Dougherty, J. Fragola
Pub Date: 1998-01-19 | DOI: 10.1109/RAMS.1998.653795
Operating experience, as captured in a facility's maintenance and repair records, provides a directly applicable source of equipment reliability data for reliability and risk analysis quantification. However, this experience may not have sufficient breadth or depth to meet the data needs of the analysis. Therefore, even the best direct experience-based data set should be complemented by generic information, if only to provide a comparison with similar equipment experience in other settings, environments, or industries. Using generic data requires some care in matching the generic component types and applications to the facility's equipment types and environments, particularly if the facility in question is rather unique. The bottom line, however, is that most often a combination of plant-specific and generic data is required to fulfill a risk and reliability study's parametric needs. While judgment cannot be removed altogether from deciding which data are most appropriate to use, experience suggests factors that can be structured into a set of guidelines. This paper provides such guidance for comparing generic and facility-specific data and for selecting the data, or combination of data, that best meets the study's needs.
Development of automated computer-aided diagnostic systems using FMECA-based knowledge capture methods
M. Boyd, A. Abou-Khalil, T. A. Montgomery, M. Gebrael
Pub Date: 1998-01-19 | DOI: 10.1109/RAMS.1998.653794
This paper describes the application in an industrial domain (commercial automotive design and maintenance) of a technique being developed at NASA Ames Research Center for building automated diagnostic tools for embedded (i.e., combined hardware/software) systems. The technique integrates a real-time sensor-information-monitoring computer process with a static knowledge base (KB) containing specific information about a system's architecture, its nominal behavior, and its behavior in the presence of failures and anomalies. The monitoring program samples status information from the system under test. An inference engine (IE) component of the monitoring program then consults the KB and, using the sampled status information together with the architectural and behavioral information in the KB, diagnoses the potential cause(s) of any observed anomalous symptoms. The technique is being developed for use aboard NASA's new Stratospheric Observatory For Infrared Astronomy (SOFIA) airborne astronomy observatory. This paper demonstrates that the same technology (FMECA-based derivation of a diagnostic KB, automated computer-assisted diagnosis of complex failure situations, and computer-based repair advisories that reduce the repair-time and expertise demands on repair technicians) is also applicable to industrial applications that need to reduce cost and improve customer service. The paper concludes with a summary of plans for future work.
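The essence of an FMECA-derived diagnostic KB is a mapping from failure modes to the symptoms each mode produces; inference then runs the mapping backward, ranking modes by how many observed symptoms they explain. The sketch below is a deliberately tiny, hypothetical illustration of that structure (the failure modes, symptoms, and scoring rule are invented for this example, not taken from the paper):

```python
# Hypothetical FMECA-derived knowledge base: each failure mode maps to
# the set of observable symptoms the FMECA says it produces.
KB = {
    "fuel_pump_failure":   {"low_fuel_pressure", "engine_stall"},
    "clogged_fuel_filter": {"low_fuel_pressure"},
    "ignition_fault":      {"engine_stall", "misfire"},
}

def diagnose(observed_symptoms):
    """Run the FMECA mapping backward: rank candidate failure modes by
    how many of the observed symptoms each one explains."""
    scored = [(mode, len(symptoms & observed_symptoms))
              for mode, symptoms in KB.items()]
    return sorted((s for s in scored if s[1] > 0),
                  key=lambda pair: -pair[1])

candidates = diagnose({"low_fuel_pressure", "engine_stall"})
```

A real system of the kind described would add criticality weights and behavioral models, but the FMECA-to-KB derivation step is exactly this transposition of the failure-mode table.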