Virtual maintenance: real-world applications within virtual environments
Pub Date: 1998-01-19 | DOI: 10.1109/RAMS.1998.653690
K. Abshire, M. Barron
The creation of a virtual maintenance capability within LMTAS (Lockheed Martin Tactical Aircraft Systems) for the F-16 program has led supportability engineering into the world of virtual reality. Achievements in applying this technology are described. Insight is provided into the challenges met and the benefits derived from applying this emerging technology to real-world requirements.
{"title":"Virtual maintenance real-world applications within virtual environments","authors":"K. Abshire, M. Barron","doi":"10.1109/RAMS.1998.653690","DOIUrl":"https://doi.org/10.1109/RAMS.1998.653690","url":null,"abstract":"The creation of a virtual maintenance capability within LMTAS (Lockheed Martin Tactical Aircraft System) for the F-16 programme has led supportability engineering into the world of virtual reality. Achievements in applying this technology are described. Insight is provided into the challenges met and benefits derived as a result of applying this emerging technology to real world requirements.","PeriodicalId":275301,"journal":{"name":"Annual Reliability and Maintainability Symposium. 1998 Proceedings. International Symposium on Product Quality and Integrity","volume":"32 4","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-01-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114135602","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A fundamental overview of accelerated-testing analytic models
Pub Date: 1998-01-19 | DOI: 10.1109/RAMS.1998.653809
H. Caruso, A. Dasgupta
Accelerated testing is often promoted as a way to save test time and cost. However, if the true significance of accelerated-test models is misunderstood, such tests can penalize cost-effective product development rather than deliver the hoped-for savings. Using physics-of-failure models, this paper emphasizes that: there are no "magic" analytical models that simply, conveniently, and accurately estimate the life of complex manufactured assemblies and products; each analytical model describes the physical change mechanisms of specific materials under particular environmental loading conditions; because product assemblies consist of many different materials and structural configurations, a product's wearout behavior must be evaluated in terms of several different, sometimes competing, physical change models; in real life and in accelerated testing, different elements of a product age or fatigue at different rates, depending on what they are made of, how they are used, and what environmental loading conditions prevail at the site of each element; accelerated testing is assumed to provide leverage for increasing the rate at which knowledge is gathered about a product, as well as for saving test time and cost, yet it can also magnify the negative effects of invalid assumptions and poorly defined boundary conditions; and successful accelerated testing relies as much on ensuring that all parties involved have reasonable expectations of what this product development tool can and cannot do as it does on good laboratory procedures.
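As a hedged illustration of the point about material-specific models (not taken from the paper), the sketch below evaluates a single Arrhenius temperature-acceleration factor for one assumed failure mechanism; a real assembly would need a separate model of this kind for each mechanism and material, and the activation energy and temperatures here are purely hypothetical.

```python
import math

BOLTZMANN_EV = 8.617e-5  # Boltzmann constant, eV/K

def arrhenius_af(use_temp_c: float, test_temp_c: float, ea_ev: float) -> float:
    """Arrhenius acceleration factor between a use and a test temperature.

    ea_ev is the activation energy of one specific failure mechanism;
    a different mechanism (or material) needs its own value.
    """
    t_use = use_temp_c + 273.15
    t_test = test_temp_c + 273.15
    return math.exp((ea_ev / BOLTZMANN_EV) * (1.0 / t_use - 1.0 / t_test))

# Hypothetical numbers: 55 C field use, 125 C test, 0.7 eV mechanism.
print(round(arrhenius_af(55.0, 125.0, 0.7), 1))  # roughly 78x acceleration
```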
{"title":"A fundamental overview of accelerated-testing analytic models","authors":"H. Caruso, A. Dasgupta","doi":"10.1109/RAMS.1998.653809","DOIUrl":"https://doi.org/10.1109/RAMS.1998.653809","url":null,"abstract":"Accelerated testing is often promoted as a solution to saving test time and costs. However, if ignorance about the true significance of accelerated test models prevails, then these tests could result in penalties to cost-effective product development efforts rather than the hoped-for reductions. Using physics of failure models, this paper emphasizes that: there are no \"magic\" analytical models that simply, conveniently, and accurately estimate the life of complex manufactured assemblies and products; each analytical model describes physical change mechanisms associated with specific materials when subjected to particular environmental loading conditions; because product assemblies consist of many different materials and structural configurations, a product's wearout behavior must be evaluated in terms of several different, sometimes competing, physical change models; in real-life and in accelerated testing, different elements of a product will age or fatigue at different rates, depending on what they are made of, how they are used, and what environmental loading conditions prevail at the site of each element; accelerated testing is assumed to provide leverage for increasing the rate at which knowledge is gathered about a product as well as saving test time and costs. However, accelerated testing can also magnify the negative effects of invalid assumptions and poorly defined boundary conditions; and successful accelerated testing relies on ensuring that all parties involved have reasonable expectations of what this product development tool can and cannot do just as much as on good laboratory procedures.","PeriodicalId":275301,"journal":{"name":"Annual Reliability and Maintainability Symposium. 1998 Proceedings. International Symposium on Product Quality and Integrity","volume":"285 1-2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-01-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123728350","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A unified approach to random-fatigue reliability quantification under random loading
Pub Date: 1998-01-19 | DOI: 10.1109/RAMS.1998.653797
D. Kececioglu, Mingxiao Jiang, Fengbin Sun
Random fatigue damage accumulation under stationary random loading, narrow-band or wide-band, has been quantified by numerous authors. This paper shows that their results can be applied to random fatigue crack growth under random loading. The random fatigue life distribution is obtained and is found to be a Birnbaum-Saunders distribution. The fatigue-life statistics and the associated reliability are then quantified.
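As a minimal numerical sketch (the shape and scale parameters below are assumed for illustration, not taken from the paper), SciPy's fatiguelife distribution is the Birnbaum-Saunders distribution, so fatigue-life reliability at a given cycle count can be read directly from its survival function.

```python
from scipy.stats import fatiguelife  # fatiguelife is the Birnbaum-Saunders distribution

# Assumed parameters for illustration: shape (alpha) and characteristic life in cycles.
alpha = 0.3
scale_cycles = 2.0e5

life = fatiguelife(alpha, loc=0.0, scale=scale_cycles)

n_cycles = 1.5e5
reliability = life.sf(n_cycles)   # P(fatigue life exceeds n_cycles)
mean_life = life.mean()           # expected cycles to failure

print(f"R({n_cycles:.0f} cycles) = {reliability:.3f}, mean life = {mean_life:.0f} cycles")
```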
{"title":"A unified approach to random-fatigue reliability quantification under random loading","authors":"D. Kececioglu, Mingxiao Jiang, Fengbin Sun","doi":"10.1109/RAMS.1998.653797","DOIUrl":"https://doi.org/10.1109/RAMS.1998.653797","url":null,"abstract":"The random fatigue damage accumulation under stationary random loading, narrow-band or wide-band, has been quantified by numerous authors. It is shown in this paper that their results can be applied to random fatigue crack growth under random loading. The random fatigue life distribution is obtained, which is found to be a Birnbaum-Saunder's distribution. Then, the fatigue life statistics and the associated reliability are quantified.","PeriodicalId":275301,"journal":{"name":"Annual Reliability and Maintainability Symposium. 1998 Proceedings. International Symposium on Product Quality and Integrity","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-01-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126813264","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The new SAE FMECA standard
Pub Date: 1998-01-19 | DOI: 10.1109/RAMS.1998.653561
J. Bowles
The new SAE FMECA standard goes a long way toward bringing FMECA (failure mode, effects, and criticality analysis) into line with modern design practices. This is accomplished through three major changes. (1) The new standard describes the FMECA procedure as a process to be used throughout the product development cycle, rather than as a task to be done after the design is complete. It emphasizes the role of functional and interface FMECAs as well as that of the traditional piece-part FMECA. (2) The concept of "failure mode equivalence" enables failure modes that have equivalent effects to be analyzed together and eliminates much of the duplicative work generated by traditional component-by-component fault analyses. It allows the analyses of functional failure modes done early in the design process to be carried over to the effects of interface and piece-part failure modes analyzed later in the design. (3) Criticality is assessed with a Pareto ranking procedure based on the probability and the severity of the failure mode. This is more broadly applicable than the criticality numbers defined in MIL-STD-1629, and it avoids some of the mathematical difficulties of the RPN analysis used in the automotive FMECA standard, SAE J1739.
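To make the contrast concrete, here is a small hypothetical sketch (the failure modes, scales, and field names are invented, not taken from the standard): a Pareto-style ranking orders failure modes by severity first and probability of occurrence second, instead of multiplying ordinal scores into a single RPN.

```python
from dataclasses import dataclass

@dataclass
class FailureMode:
    name: str
    severity: int     # ordinal scale, e.g. 1 (minor) .. 10 (catastrophic)
    occurrence: int   # ordinal scale, e.g. 1 (remote) .. 10 (frequent)

modes = [
    FailureMode("connector corrosion", severity=4, occurrence=7),
    FailureMode("solder joint fatigue", severity=8, occurrence=3),
    FailureMode("seal leakage", severity=8, occurrence=5),
]

# Pareto-style ranking: severity dominates, occurrence breaks ties.
# An RPN approach would instead multiply the two ordinal scores.
ranked = sorted(modes, key=lambda m: (m.severity, m.occurrence), reverse=True)
for rank, m in enumerate(ranked, start=1):
    print(rank, m.name, "severity", m.severity, "occurrence", m.occurrence)
```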
{"title":"The new SAE FMECA standard","authors":"J. Bowles","doi":"10.1109/RAMS.1998.653561","DOIUrl":"https://doi.org/10.1109/RAMS.1998.653561","url":null,"abstract":"The new SAE FAMECA standard goes a long way toward bringing FMECA (failure mode effects and criticality analysis) into line with modern design practices. This is accomplished through three major changes. (1) The new standard describes the FMECA procedure as a process to be used throughout the product development cycle, rather than as a task to be done after the design is complete. It emphasizes the role of functional and interface FMECAs as well as that of the traditional piece part FMECA. (2) The concept of \"failure mode equivalence\" enables failure modes that have equivalent effects to be analyzed together and reduces much of the duplicative work generated by traditional component-by-component fault analyses. This concept allows the analyses of functional failure modes done early in the design process to be carried over to the effects of interface and piece-part failure modes analyzed later in the design. (3) Criticality is assessed using a Pareto ranking procedure based on the probability and the severity of the failure mode. This is more broadly applicable than the use of criticality numbers as defined in Mil-Std-1629 and it avoids some of the mathematical difficulties of the RPN analysis used in the Automobile FMECA standard, SAE J1739.","PeriodicalId":275301,"journal":{"name":"Annual Reliability and Maintainability Symposium. 1998 Proceedings. International Symposium on Product Quality and Integrity","volume":"87 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-01-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122218966","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On the interactions of chemical-process design under uncertainty and maintenance optimisation
Pub Date: 1998-01-19 | DOI: 10.1109/RAMS.1998.653796
C. Vassiliadis, E. N. Pistikopoulos
While the impact of uncertainty in process parameters (such as product demands, prices, and physical properties) on the design and operation of a chemical process has been widely recognised and studied over the last fifteen years, relatively little progress has been made in studying the influence of process uncertainty on maintenance scheduling and optimisation. This paper presents a novel optimisation formulation and a decomposition solution strategy for maintenance scheduling under uncertainty in the context of chemical process design and operation. The formulation features an expected-profit objective function that accounts for the transitions between the different states of the chemical system due to equipment failure; corrective and preventive maintenance policies are considered explicitly. The resulting model is a large-scale mixed-integer nonlinear optimal control problem. By exploiting reliability properties, an effective two-step decomposition solution procedure is proposed which, as illustrated with a process example problem, yields an optimal preventive maintenance policy in the presence of process uncertainty: the time instants of required maintenance actions over a time horizon and the optimal sequence of components on which preventive maintenance is to be performed.
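The paper's mixed-integer formulation is far richer than anything that fits here, but as a hedged toy example of how an expected-cost criterion selects a preventive maintenance interval, the sketch below grid-searches an age-replacement policy for a single component with assumed Weibull wear-out parameters and costs.

```python
import math

# Assumed Weibull wear-out parameters and costs; purely illustrative.
BETA, ETA = 2.5, 1000.0   # shape, characteristic life (hours)
C_PM, C_CM = 1.0, 10.0    # preventive vs. corrective maintenance cost

def survival(t: float) -> float:
    return math.exp(-((t / ETA) ** BETA))

def cost_rate(interval: float, steps: int = 2000) -> float:
    """Long-run cost per hour of an age-replacement policy with the given PM interval."""
    dt = interval / steps
    expected_uptime = sum(survival(i * dt) * dt for i in range(steps))
    expected_cycle_cost = C_PM * survival(interval) + C_CM * (1.0 - survival(interval))
    return expected_cycle_cost / expected_uptime

# Coarse grid search for the interval with the lowest cost rate.
best = min(range(100, 2001, 50), key=lambda t: cost_rate(float(t)))
print("approximate optimal PM interval:", best, "hours")
```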
{"title":"On the interactions of chemical-process design under uncertainty and maintenance-optimisation","authors":"C. Vassiliadis, E. N. Pistikopoulos","doi":"10.1109/RAMS.1998.653796","DOIUrl":"https://doi.org/10.1109/RAMS.1998.653796","url":null,"abstract":"While the impact of uncertainty in process parameters (such as product demands and prices, physical properties, etc.) on the design and operation of a chemical process has been widely recognised and studied over the last fifteen years, relatively little progress has been made in studying the influence of process uncertainty on maintenance scheduling and optimisation. This paper presents a novel optimisation formulation and a decomposition solution strategy for addressing the problem of maintenance scheduling under uncertainty in the context of chemical process design and operation. The proposed formulation features an expected profit objective function which takes into account the transitions between the different states of the chemical system, due to equipment failure; corrective and preventive maintenance policies are explicitly considered. The resulting model corresponds to a large scale mixed-integer nonlinear optimal control problem. By exploiting reliability properties, an effective two-step decomposition solution procedure is then proposed, which as illustrated with a process example problem, depicts an optimal preventive maintenance policy in the presence of process uncertainty: the time instants of required maintenance actions over a time horizon and the optimal sequence of components on which preventive maintenance is to be performed.","PeriodicalId":275301,"journal":{"name":"Annual Reliability and Maintainability Symposium. 1998 Proceedings. International Symposium on Product Quality and Integrity","volume":"185 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-01-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122771406","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Redundancy verification analysis: an alternative to FMEA for low-cost missions
Pub Date: 1998-01-19 | DOI: 10.1109/RAMS.1998.653574
J. Sincell, R. Perez, P. Noone, D. Oberhettinger
Redundancy verification analysis (RVA) is a promising technique for verifying internal redundancy within electronic assemblies, as well as "cross-strap" redundancy between them, in cost- or schedule-constrained spacecraft development projects. RVA tracks a signal from its source to the end of the signal path, through all of the subsystems along the way, including software. When performed in conjunction with a worst-case analysis (WCA), RVA may obviate the need for a system-level failure mode and effects analysis (FMEA) while providing a detailed examination of the actual workings of system hardware, software, and interfaces. Demonstrated by JPL on the Mars Global Surveyor project, RVA is consistent with NASA's emphasis on "faster-better-cheaper" spacecraft design and development.
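As a rough sketch of the path-tracing idea (the topology and component names are hypothetical, and this is not JPL's procedure), a signal path can be modelled as a directed graph; enumerating every simple path from the signal source to its sink and intersecting the intermediate nodes exposes any component that is a single point of failure for that signal.

```python
from typing import Dict, List

# Hypothetical signal topology: a sensor cross-strapped to two processors
# that both feed a single downstream actuator interface.
GRAPH: Dict[str, List[str]] = {
    "sensor": ["proc_A", "proc_B"],
    "proc_A": ["actuator_if"],
    "proc_B": ["actuator_if"],
    "actuator_if": [],
}

def simple_paths(graph, node, sink, seen=()):
    """Enumerate all simple paths from node to sink by depth-first search."""
    if node == sink:
        yield list(seen) + [node]
        return
    for nxt in graph.get(node, []):
        if nxt not in seen:
            yield from simple_paths(graph, nxt, sink, seen + (node,))

paths = list(simple_paths(GRAPH, "sensor", "actuator_if"))
# Any component present on every path is a single point of failure for this signal.
common = set.intersection(*(set(p[1:-1]) for p in paths)) if paths else set()
print("signal paths:", paths)
print("single points of failure:", common or "none")
```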
{"title":"Redundancy verification analysis-an alternative to FMEA for low cost missions","authors":"J. Sincell, R. Perez, P. Noone, D. Oberhettinger","doi":"10.1109/RAMS.1998.653574","DOIUrl":"https://doi.org/10.1109/RAMS.1998.653574","url":null,"abstract":"Redundancy verification analysis (RVA) is a promising technique for verifying internal redundancy within electronic assemblies, as well as \"cross-strap\" redundancy between them, in cost or schedule constrained spacecraft development projects. RVA tracks a signal from its source to the end of the signal path, through all the subsystems along the way, including software. When performed in conjunction with a worst case analysis (WCA), RVA may obviate the need for a system-level failure mode and effects analysis (FMEA), providing a detailed examination of the actual workings of system hardware, software, and interfaces. Demonstrated by JPL on the Mars Global Surveyor project, use of RVA is consistent with NASA's emphasis on \"faster-better-cheaper\" spacecraft design and development.","PeriodicalId":275301,"journal":{"name":"Annual Reliability and Maintainability Symposium. 1998 Proceedings. International Symposium on Product Quality and Integrity","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-01-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125066086","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Determining warranty benefits for automobile design changes
Pub Date: 1998-01-19 | DOI: 10.1109/RAMS.1998.653636
K. Majeske, G. Herrin
Automobile manufacturers continually modify product designs to reduce costs and improve customer satisfaction through better field performance. Managers prioritize and approve proposed changes using a variety of methods that generally include some type of cost-benefit analysis. An implied assumption of the cost-justification process is that reliability bench tests accurately predict product field performance. This research suggests that manufacturers use historical data to verify that a correlation exists between bench-test and actual field (warranty) performance. Further, manufacturers should analyze design changes using post hoc tests on observed field failure (warranty claim) data; this analysis can help the manufacturer determine the actual financial consequences of the design change. We demonstrate the recommended analysis techniques using manufacturer-provided automobile warranty data.
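For a sense of scale only (the claim counts, volumes, and repair cost below are invented, and a real analysis would have to handle censored warranty data with proper statistical tests, as the paper recommends), a back-of-the-envelope benefit estimate compares claim rates before and after a design change.

```python
# Illustrative only: made-up claim counts, not real warranty data.
vehicles_before, claims_before = 50_000, 900
vehicles_after, claims_after = 40_000, 480
cost_per_claim = 310.0   # assumed average repair cost, USD

rate_before = claims_before / vehicles_before   # claims per vehicle
rate_after = claims_after / vehicles_after

# Rough benefit of the design change projected onto a future production volume.
projected_volume = 200_000
saving = (rate_before - rate_after) * projected_volume * cost_per_claim
print(f"claims/vehicle: {rate_before:.3f} -> {rate_after:.3f}; projected saving ~ ${saving:,.0f}")
```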
{"title":"Determining warranty benefits for automobile design changes","authors":"K. Majeske, G. Herrin","doi":"10.1109/RAMS.1998.653636","DOIUrl":"https://doi.org/10.1109/RAMS.1998.653636","url":null,"abstract":"Automobile manufacturers continually modify product designs to reduce costs and improve customer satisfaction through better field performance. Managers prioritize and approve proposed changes using a variety of methods that generally include some type of cost benefit analysis. An implied assumption of the cost justification process is that reliability bench tests provide an accurate prediction of product field performance. This research suggests that manufacturers use historical data to verify that a correlation exists between the bench test and actual field (warranty) performance. Further, manufacturers should analyze design changes using post hoc tests on observed field failure (warranty claim) data. This analysis can assist the manufacturer in determining the actual financial consequences of the design change. We demonstrate the recommended analysis techniques using manufacturer provided automobile warranty data.","PeriodicalId":275301,"journal":{"name":"Annual Reliability and Maintainability Symposium. 1998 Proceedings. International Symposium on Product Quality and Integrity","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-01-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124504276","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The need for measurement-based reliability evaluation
Pub Date: 1998-01-19 | DOI: 10.1109/RAMS.1998.653749
M. Hecht
Defensible quantitative assessments of the reliability and availability of computer systems, including software, are possible. This paper characterizes the need for quantitative, empirically based dependability assessment, describes some of the previous work in this area, and identifies open problems. While research in measurement-based analysis of computer dependability is ongoing, the techniques developed in this area have achieved significant experimental results. Measurement-based analysis can also verify the assumptions and parameters used in design models. The results are useful for designing and maintaining highly dependable computer systems intended for safety-critical applications.
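A trivial hedged example of what "measurement-based" means in practice (the event log is hypothetical): steady-state availability can be estimated directly from observed uptime and repair durations rather than from an assumed design model.

```python
# Hypothetical event log: (hours of uptime before a failure, hours to restore service).
observed = [(412.0, 1.5), (730.0, 0.4), (95.0, 2.2), (1210.0, 0.9)]

mtbf = sum(up for up, _ in observed) / len(observed)     # mean time between failures
mttr = sum(rep for _, rep in observed) / len(observed)   # mean time to repair
availability = mtbf / (mtbf + mttr)

print(f"MTBF = {mtbf:.1f} h, MTTR = {mttr:.2f} h, availability = {availability:.4f}")
```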
{"title":"The need for measurement based reliability evaluation","authors":"M. Hecht","doi":"10.1109/RAMS.1998.653749","DOIUrl":"https://doi.org/10.1109/RAMS.1998.653749","url":null,"abstract":"Defensible quantitative assessments of the reliability and availability of computer systems including software are possible. This paper characterizes the need for quantitative empirically-based dependability assessment, describes some of the previous work in this area and identifies problems. While there is still ongoing research in measurement-based analysis of computer dependability, the techniques developed in this area have achieved significant experimental results. Measurement-based analysis can also provide verification of assumptions and parameters used in the design models. The results are useful for designing and maintaining highly-dependable computer systems intended for use in safety critical applications.","PeriodicalId":275301,"journal":{"name":"Annual Reliability and Maintainability Symposium. 1998 Proceedings. International Symposium on Product Quality and Integrity","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-01-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124215133","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Efficient algorithms for microprocessor testing
Pub Date: 1998-01-19 | DOI: 10.1109/RAMS.1998.653641
B. Joshi, S. Hosseini
In this paper, the authors present simple yet efficient fault detection algorithms for microprocessor systems. They propose test generation algorithms that produce test sequences, which are then used by the proposed testing algorithms. The test generation algorithms fall into two classes: the data processing unit test generator generates tests for every functional block in the ALU, while the control unit test generator generates tests for detecting faults in instruction and register decoding, buses, and registers. The authors show that the major advantage of the data processing unit test generation algorithm is that it ignores implementation details and can therefore be used across a wide spectrum of technologies. They also show analytically that the running time of the control unit test algorithm is O(n), where n is the number of instructions. The simulation techniques used and the experimental results obtained are presented. The concept of functionality testing has been strictly maintained, and the simulation results suggest that the technique is independent of the implementation. The technique can easily be applied to larger multiprocessor systems, where each processor can perform quick yet efficient tests on a subset of the microprocessors.
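In the spirit of implementation-independent functional testing (this toy sketch is not the authors' algorithm; the operation set and test patterns are assumed), an ALU under test can be exercised block by block with boundary and walking-one operands and compared against a golden functional model.

```python
def alu(op: str, a: int, b: int, width: int = 8) -> int:
    """Golden functional model of a small ALU."""
    mask = (1 << width) - 1
    results = {"add": (a + b) & mask, "and": a & b, "or": a | b, "xor": a ^ b}
    return results[op]

def functional_test(alu_under_test, width: int = 8) -> str:
    """Exercise each functional block with boundary and walking-one patterns."""
    mask = (1 << width) - 1
    patterns = [0, mask] + [1 << i for i in range(width)]
    for op in ("add", "and", "or", "xor"):
        for a in patterns:
            for b in patterns:
                if alu_under_test(op, a, b) != alu(op, a, b):
                    return f"fault detected in {op} block (a={a}, b={b})"
    return "all functional blocks passed"

print(functional_test(alu))  # the golden model tested against itself passes
```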
{"title":"Efficient algorithms for microprocessor testing","authors":"B. Joshi, S. Hosseini","doi":"10.1109/RAMS.1998.653641","DOIUrl":"https://doi.org/10.1109/RAMS.1998.653641","url":null,"abstract":"In this paper, the authors present simple yet efficient fault detection algorithms for microprocessor systems. They propose test generation algorithms to generate test sequences. These test sequences are used by the proposed testing algorithms. The test generation algorithms are divided into two classes. The data processing unit test generator generates tests for every functional block in the ALU while the control unit test generator generates tests for fault detection in instruction and register decoding, buses, and registers. The authors show that the major advantage of the test generation algorithm for the data processing unit is that it ignores the implementation details and thus it can be used for a wide spectrum of technologies. They also show analytically that the running time of the control unit test algorithm is in O(n) where n is the number of instructions. The simulation techniques used and the experimental results obtained are presented. The concept of functionality tests has been strictly maintained. The simulation results suggest that the technique is independent of the implementation. This technique can be easily applied to larger multiprocessor systems where each processor can perform quick yet efficient tests on a subset of the microprocessors.","PeriodicalId":275301,"journal":{"name":"Annual Reliability and Maintainability Symposium. 1998 Proceedings. International Symposium on Product Quality and Integrity","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-01-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126436752","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Software reliability: a continuing dilemma
Pub Date: 1998-01-19 | DOI: 10.1109/RAMS.1998.653747
D. D. Bell, S. Keene
Summary form only given. Software reliability has received a great deal of attention since the 1980s. How should the reliability analyst deal with it? Over this period, many approaches to determining software reliability have been advocated by some of the most reputable people in the industry, and the authors anticipate that many more will be advocated in the future. It is a difficult topic to address in the context of a project with increasing attention to, and constraints placed on, schedules and budgets. This paper attempts to provide the reliability engineer with answers to the software reliability question.
{"title":"Software reliability: a continuing dilemma","authors":"D. D. Bell, S. Keene","doi":"10.1109/RAMS.1998.653747","DOIUrl":"https://doi.org/10.1109/RAMS.1998.653747","url":null,"abstract":"Summary form only given. Software reliability has received a great deal of attention since the 1980s. How should the reliability analyst deal with it? Over this period many approaches to determine software reliability have been advocated by some of the most reputable people in the industry. The authors anticipate that there will be many more advocated in the future. It is a difficult topic to address in the context of a project with increasing attention to, and constraints put on, schedules and budgets, This paper attempts to provide the reliability engineer with the answers to the software reliability question.","PeriodicalId":275301,"journal":{"name":"Annual Reliability and Maintainability Symposium. 1998 Proceedings. International Symposium on Product Quality and Integrity","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-01-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131765877","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}