Reliability analysis for integrated networks with unreliable nodes and software failures in the time domain
Pub Date: 2000-01-24 | DOI: 10.1109/RAMS.2000.816293
W. Hou, O. Okogbaa
In this paper, an approach to analyzing the reliability of an integrated network with unreliable nodes and software failures is developed. An example is given in which the software failure process is described by the Jelinski-Moranda de-eutrophication model, while hardware and link failures follow a Poisson process. The impact of failures and time on software utilization is explored.
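The Jelinski-Moranda model treats the software failure rate as proportional to the number of faults remaining, while a homogeneous Poisson process gives constant hardware and link failure rates. As a rough illustration of how these pieces might combine (this is not the paper's actual network model; the simple series structure and all parameter values below are assumptions), the reliability of one path through such a network could be sketched as:

```python
import math

def jm_reliability(t, n_initial, faults_fixed, phi):
    """Jelinski-Moranda: the hazard rate is proportional to remaining faults.
    After `faults_fixed` repairs, the rate is phi * (n_initial - faults_fixed)."""
    rate = phi * (n_initial - faults_fixed)
    return math.exp(-rate * t)

def poisson_reliability(t, lam):
    """Constant-rate (Poisson) failure process: probability of zero
    failures in [0, t] is exp(-lam * t)."""
    return math.exp(-lam * t)

def series_path_reliability(t, node_rates, link_rate, n_initial, faults_fixed, phi):
    """Reliability of one communication path whose nodes, link, and
    hosted software must all survive for the whole interval t."""
    r = jm_reliability(t, n_initial, faults_fixed, phi)
    for lam in node_rates:
        r *= poisson_reliability(t, lam)
    r *= poisson_reliability(t, link_rate)
    return r

# Example: two nodes at 1e-4 failures/h, one link at 5e-5 failures/h,
# software with 30 initial faults, 25 already fixed, phi = 1e-3 per fault-hour.
print(series_path_reliability(100.0, [1e-4, 1e-4], 5e-5, 30, 25, 1e-3))
```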
{"title":"Reliability analysis for integrated networks with unreliable nodes and software failures in the time domain","authors":"W. Hou, O. Okogbaa","doi":"10.1109/RAMS.2000.816293","DOIUrl":"https://doi.org/10.1109/RAMS.2000.816293","url":null,"abstract":"In this paper, an approach of analyzing the reliability for an integrated network with unreliable nodes and software failure is developed. An example is given in which the software failure is depicted by the Jelinski Moranda De-Eutrophication Model, and the failures of hardware and link follow the Poisson process. The impact of failure and time on software utilization is explored.","PeriodicalId":178321,"journal":{"name":"Annual Reliability and Maintainability Symposium. 2000 Proceedings. International Symposium on Product Quality and Integrity (Cat. No.00CH37055)","volume":"103 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-01-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132094667","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
IBM Personal Systems Group. Applications and results of reliability and quality programs
Pub Date: 2000-01-24 | DOI: 10.1109/RAMS.2000.816281
S. Nassar, R. Barnett
The application of reliability and quality techniques, supported by a sound quality management system, has resulted in dramatic improvements in manufacturing yields, as well as in other internal and external quality metrics, for IBM's Personal Systems Group (PSG) products. These improvements, along with better product performance and higher customer satisfaction, were realized while reducing IBM's costs. This accomplishment is critical in the highly competitive personal computer marketplace and demonstrates IBM's commitment to excellence. With this solid management system, these goals can be achieved both in a high-volume, highly complex manufacturing process and in a low-volume, low-complexity process, as PSG manufactures systems in both environments. In summary, spectacular improvements have been achieved at a worldwide level, across all the PSG product brands, through excellent teamwork and "attention to detail" by PSG engineers worldwide. The extended team intends to enhance this activity during 1999 and beyond in order to drive performance to the next level and deliver further benefits to IBM and its customers.
{"title":"IBM Personal Systems Group. Applications and results of reliability and quality programs","authors":"S. Nassar, R. Barnett","doi":"10.1109/RAMS.2000.816281","DOIUrl":"https://doi.org/10.1109/RAMS.2000.816281","url":null,"abstract":"The application of reliability and quality techniques, supported by a sound quality management system, have resulted in dramatic improvements in manufacturing yields, as well as other internal and external quality metrics for IBM's Personal Systems Group (PSG) products. These improvements, product performance and higher customer satisfaction, were realized while reducing IBM costs. This accomplishment is critical in the highly competitive personal computer marketplace, and demonstrates IBM's commitment to excellence. Using this solid management system, these goals can be obtained in a high volume highly complex manufacturing process as well as a low volume low complexity process, as PSG manufactures systems in both environments. In summary, spectacular improvements have been achieved at a worldwide level, across all the PSG product brands. This has been achieved by excellent teamwork and \"attention to detail\" by PSG engineers worldwide. The extended team intends to enhance this activity during 1999 and beyond in order to drive performance to the next level and deliver further benefits to IBM and their customers.","PeriodicalId":178321,"journal":{"name":"Annual Reliability and Maintainability Symposium. 2000 Proceedings. International Symposium on Product Quality and Integrity (Cat. No.00CH37055)","volume":"53 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-01-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131354920","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Risk management in the new millennium
Pub Date: 2000-01-24 | DOI: 10.1109/RAMS.2000.816308
J. Eyman, A. A. Boyd, R. Jones, W. Vantine, S. Smith, J. Newman
Project risk management has been recognised for some time as a formal discipline in its own right, and there is growing consensus on the elements that comprise best practice. However, the project risk management field has not fully matured, and there are a number of areas requiring further development. This paper presents the authors' perceptions of the directions in which project risk management might develop in the short to medium term, comprising five key areas: organisational benchmarking using maturity model concepts; integration of risk management with overall project management and corporate culture; increased depth of analysis and breadth of application; inclusion of behavioural aspects in the risk process; and development of a body of evidence to justify and support the use of risk management.
{"title":"Risk management in the new millennium","authors":"J. Eyman, A. A. Boyd, R. Jones, W. Vantine, S. Smith, J. Newman","doi":"10.1109/RAMS.2000.816308","DOIUrl":"https://doi.org/10.1109/RAMS.2000.816308","url":null,"abstract":"Project risk management has been recognised for some time as a formal discipline in its own right, and there is growing consensus on the elements which comprise best practice. However the project risk management field has not fully matured and there are a number of areas requiring further development. This paper presents the author’s perceptions on the directions in which project risk management might develop in the short to medium term, comprising five key areas. These are : organisational bench-marking using maturity model concepts; integration of risk management with overall project management and corporate culture; increased depth of analysis and breadth of application; inclusion of behavioural aspects in the risk process; and development of a body of evidence to justify and support use of risk management.","PeriodicalId":178321,"journal":{"name":"Annual Reliability and Maintainability Symposium. 2000 Proceedings. International Symposium on Product Quality and Integrity (Cat. No.00CH37055)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-01-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125872102","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Software FMEA techniques
Pub Date: 2000-01-24 | DOI: 10.1109/RAMS.2000.816294
P. L. Goddard
Assessing the safety characteristics of software-driven safety-critical systems is problematic. The author has performed software FMEA on embedded automotive platforms for brakes, throttle, and steering, with promising results. Use of software FMEA at both a system and a detailed level has given visibility into software and hardware architectural approaches that assure safety of operation while minimizing the cost of safety-critical embedded processor designs. Software FMEA has been referred to in the technical literature for more than fifteen years. It has also been recommended for evaluating critical systems in some standards, notably draft IEC 61508, and is provided for in the current drafts of SAE ARP 5580. However, techniques for applying software FMEA to systems during their design have been largely missing from the literature. Software FMEA has been applied to the assessment of safety-critical real-time control systems embedded in military and automotive products. This paper follows on from, and significantly expands, the software FMEA techniques originally described by the author in the 1993 RAMS paper "Validating the Safety of Real-Time Control Systems Using FMEA".
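The abstract does not spell out a worksheet format, so as context only, here is a conventional FMEA prioritization applied to software functions. The failure modes, ratings, and the severity-occurrence-detection RPN ranking are generic FMEA practice and illustrative assumptions, not the author's published technique:

```python
from dataclasses import dataclass

@dataclass
class FailureMode:
    item: str        # software function or module under analysis
    mode: str        # how the function can fail
    effect: str      # system-level effect of the failure
    severity: int    # 1 (negligible) .. 10 (catastrophic)
    occurrence: int  # 1 (remote) .. 10 (frequent)
    detection: int   # 1 (almost certain detection) .. 10 (undetectable)

    @property
    def rpn(self) -> int:
        # Risk Priority Number: the standard FMEA ranking metric.
        return self.severity * self.occurrence * self.detection

worksheet = [
    FailureMode("throttle_control", "output stuck at last value",
                "unintended acceleration", 10, 3, 4),
    FailureMode("brake_monitor", "missed sensor timeout",
                "late fault annunciation", 7, 4, 6),
]

# Rank failure modes for mitigation, highest risk first.
for fm in sorted(worksheet, key=lambda f: f.rpn, reverse=True):
    print(f"{fm.item}: {fm.mode} -> RPN {fm.rpn}")
```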
{"title":"Software FMEA techniques","authors":"P. L. Goddard","doi":"10.1109/RAMS.2000.816294","DOIUrl":"https://doi.org/10.1109/RAMS.2000.816294","url":null,"abstract":"Assessing the safety characteristics of software driven safety critical systems is problematic. The author has performed software FMEA on embedded automotive platforms for brakes, throttle, and steering with promising results. Use of software FMEA at a system and a detailed level has allowed visibility of software and hardware architectural approaches which assure safety of operation while minimizing the cost of safety critical embedded processor designs. Software FMEA has been referred to in the technical literature for more than fifteen years. Additionally, software FMEA has been recommended for evaluating critical systems in some standards, notably draft IEC 61508. Software FMEA is also provided for in the current drafts of SAE ARP 5580. However, techniques for applying software FMEA to systems during their design have been largely missing from the literature. Software FMEA has been applied to the assessment of safety critical real-time control systems embedded in military and automotive products. The paper is a follow on to and provides significant expansion to the software FMEA techniques originally described by the author in the 1993 RAMS paper \"Validating The Safety Of Real-Time Control Systems Using FMEA\".","PeriodicalId":178321,"journal":{"name":"Annual Reliability and Maintainability Symposium. 2000 Proceedings. International Symposium on Product Quality and Integrity (Cat. No.00CH37055)","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-01-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128872455","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Use of fault tree analysis for evaluation of system-reliability improvements in design phase
Pub Date: 2000-01-24 | DOI: 10.1109/RAMS.2000.816275
M. Krasich
Traditional failure mode and effects analysis is applied as a bottom-up analytical technique to identify component failure modes and their causes and effects on system performance, and to estimate their likelihood, severity, and criticality or priority for mitigation. Failure modes and causes other than those associated with hardware, primarily electronic hardware, remained poorly addressed or not addressed at all. Likelihood of occurrence was determined from component failure rates or estimated by engineering judgement; the resulting prioritization is consequently difficult, so only the apparent safety-related or highly critical issues were addressed. When thoroughly done, traditional FMEA or FMECA is too involved to be used as an effective tool for reliability improvement of the product design. Fault tree analysis, applied to the product top-down in view of its functionality, failure definition, architecture, and stress and operational profiles, provides a methodical way of following the product's functional flow down to the low-level assemblies, components, failure modes, and their respective causes and combinations. The flexibility of FTA in modeling various functional conditions and interactions, such as enabling events or events with a specific priority of occurrence, provides an accurate representation of functional interdependence. In addition to accommodating mixed reliability attributes (failure rates mixed with failure probabilities), fault trees are easy to construct and change for quick tradeoffs, since the roll-up of unreliability values is automatic and the final quantitative reliability results are available instantly. The failure mode analysis using the fault tree technique described in this paper allows a real, in-depth engineering evaluation of each individual cause of a failure mode for software and hardware components, their functions, stresses, operability, and interactions.
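The point about mixing failure rates with fixed failure probabilities and rolling unreliability up through the tree can be shown with a minimal sketch. The gate structure, mission time, and numbers below are assumptions for illustration, and basic events are assumed independent:

```python
import math

def prob_from_rate(lam: float, t: float) -> float:
    """Convert a constant failure rate into a failure probability over
    mission time t (exponential model): Q = 1 - exp(-lam * t)."""
    return 1.0 - math.exp(-lam * t)

def or_gate(*qs: float) -> float:
    """Output event occurs if ANY input event occurs (independent inputs)."""
    p_none = 1.0
    for q in qs:
        p_none *= (1.0 - q)
    return 1.0 - p_none

def and_gate(*qs: float) -> float:
    """Output event occurs only if ALL input events occur (independent inputs)."""
    p = 1.0
    for q in qs:
        p *= q
    return p

t = 1000.0                           # mission hours
q_hw = prob_from_rate(2e-5, t)       # hardware cause, given as a failure rate
q_sw = 0.01                          # software cause, given as a fixed probability
q_redundant = and_gate(q_hw, q_hw)   # two redundant hardware channels must both fail
q_top = or_gate(q_redundant, q_sw)   # system fails via either branch
print(f"top-event unreliability: {q_top:.6f}")
```

Because the roll-up is just arithmetic over the gates, changing a basic event and re-evaluating the top event is immediate, which is the tradeoff speed the abstract describes.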
{"title":"Use of fault tree analysis for evaluation of system-reliability improvements in design phase","authors":"M. Krasich","doi":"10.1109/RAMS.2000.816275","DOIUrl":"https://doi.org/10.1109/RAMS.2000.816275","url":null,"abstract":"Traditional failure mode and effects analysis is applied as a bottom-up analytical technique to identify component failure modes and their causes and effects on the system performance, estimate their likelihood, severity and criticality or priority for mitigation. Failure modes and their causes, other than those associated with hardware, primarily electronic, remained poorly addressed or not addressed at all. Likelihood of occurrence was determined on the basis of component failure rates or by applying engineering judgement in their estimation. Resultant prioritization is consequently difficult so that only the apparent safety-related or highly critical issues were addressed. When thoroughly done, traditional FMEA or FMECA were too involved to be used as a effective tool for reliability improvement of the product design. Fault tree analysis applied to the product as a top down in view of its functionality, failure definition, architecture and stress and operational profiles provides a methodical way of following products functional flow down to the low level assemblies, components, failure modes and respective causes and their combination. Flexibility of modeling of various functional conditions and interaction such as enabling events, events with specific priority of occurrence, etc., using FTA, provides for accurate representation of their functionality interdependence. In addition to being capable of accounting for mixed reliability attributes (failure rates mixed with failure probabilities), fault trees are easy to construct and change for quick tradeoffs as roll up of unreliability values is automatic for instant evaluation of the final quantitative reliability results. Failure mode analysis using fault tree technique that is described in this paper allows for real, in-depth engineering evaluation of each individual cause of a failure mode regarding software and hardware components, their functions, stresses, operability and interactions.","PeriodicalId":178321,"journal":{"name":"Annual Reliability and Maintainability Symposium. 2000 Proceedings. International Symposium on Product Quality and Integrity (Cat. No.00CH37055)","volume":"28 4","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-01-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"113986106","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multi-state k-out-of-n system model and its applications
Pub Date: 2000-01-24 | DOI: 10.1109/RAMS.2000.816319
Jinsheng Huang, M. Zuo
The binary k-out-of-n system is a commonly used reliability model in engineering practice. Many authors have extended the concept of the binary k-out-of-n system to multi-state k-out-of-n systems, but with the limitation that k is assumed constant at all system state levels. In this paper, a new definition of the multi-state k-out-of-n system is presented. Under the proposed definition, maintaining each system state level may require a different number of components to be at a certain state or above. The multi-state k-out-of-n system model has more complex properties than the binary model. Increasing and decreasing multi-state k-out-of-n systems are two special types of the multi-state k-out-of-n system. The increasing multi-state k-out-of-n system has the dominant property and, as a result, can be treated as a binary k-out-of-n system for each fixed required system state level. The decreasing multi-state k-out-of-n system does not belong to the dominant multi-state system group, and consequently not all results from the binary k-out-of-n system extend to it. Examples are given to illustrate that the multi-state k-out-of-n system model can describe various engineering systems.
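As a concrete reading of the idea (the paper's formal definition may differ in detail; this sketch assumes "the system sustains level j when at least k_j components are in state j or above", with a per-level requirement vector k):

```python
def system_state(component_states, k):
    """Highest system level j (1..M) for which at least k[j-1] components
    are in state j or above; 0 if no level's requirement is met.
    component_states: component state values in 0..M.
    k: per-level requirements, k[0] for level 1, ..., k[M-1] for level M."""
    M = len(k)
    best = 0
    for j in range(1, M + 1):
        if sum(1 for s in component_states if s >= j) >= k[j - 1]:
            best = j
    return best

# Increasing system (k_1 <= k_2 <= k_3): higher levels need more components.
print(system_state([3, 2, 3, 1], k=[1, 2, 3]))   # -> 2

# For each fixed level j the check is exactly a binary k_j-out-of-n:G test
# on the indicator "component state >= j", which is the sense in which the
# increasing (dominant) case reduces to binary models level by level.
```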
{"title":"Multi-state k-out-of-n system model and its applications","authors":"Jinsheng Huang, M. Zuo","doi":"10.1109/RAMS.2000.816319","DOIUrl":"https://doi.org/10.1109/RAMS.2000.816319","url":null,"abstract":"The binary k-out-of-n system is a commonly used reliability model in engineering practice. Many authors have extended the concept of binary k-out-of-n system to multi-state k-out-of-n systems, but with a limitation that k is assumed to be a constant at all the system levels. In this paper, a new definition of the multi-state k-out-of-n system is presented. Under the proposed definition, maintaining at least a certain system state level may require a different number of components to be at a certain state or above. The multi-state k-out-of-n system model has more complex properties than binary k-out-of-n systems. Increasing and decreasing multi-state k-out-of-n systems are two special types of the multi-state k-out-of-n system. The increasing multi-state k-out-of-n system has the dominant property, and as a result, we can treat it as a binary k-out-of-n system for each fixed required system state level. The decreasing multi-state k-out-of-n system does not belong to the dominant multi-state system group, and consequently, we can not extend all results from the binary k-out-of-n system to it. Examples are given to illustrate that the multi-state k-out-of-n system model can be used to describe various engineering systems.","PeriodicalId":178321,"journal":{"name":"Annual Reliability and Maintainability Symposium. 2000 Proceedings. International Symposium on Product Quality and Integrity (Cat. No.00CH37055)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-01-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130079190","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A design tool for fault tolerant systems
Pub Date: 2000-01-24 | DOI: 10.1109/RAMS.2000.816328
G. Turconi, E. Di Perna
Complex systems may have to meet severe availability objectives related to the importance of the service being provided; such systems must be fault tolerant. Designers of fault-tolerant systems try to implement diagnostics that detect as many faults as possible because, in complex systems, uncovered faults lead to latent, highly undesirable situations. Unfortunately, diagnostics themselves may fail. Starting from the basics of FMECA, a design methodology and a tool called DIANA (DIagnostic ANAlysis) have been developed. The basic idea of DIANA is to perform coverage analysis during hardware and firmware design together with reliability engineering analysis. To this purpose, DIANA has been integrated into the computer-aided design (CAD) tools in the same way that logic simulation, timing analysis, and analog transmission simulation are performed. The DIANA project has produced two main results: first, it gives designers a tool that helps them think in a way that prevents uncovered fault situations; second, it calculates the effects of faults on diagnostics in order to provide transition rates to system availability models when real, rather than ideal, cases are taken into account.
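The link from fault coverage to availability-model transition rates is the standard one: a diagnostic coverage factor c splits each failure rate into a detected part (leading to repair or switchover) and a latent, undetected part. A minimal sketch of that split, with assumed numbers (DIANA's actual outputs are not reproduced here):

```python
def split_transition_rates(failure_rate: float, coverage: float):
    """Split a component failure rate into covered and uncovered transition
    rates for a Markov availability model.
    coverage: fraction of faults the diagnostics actually detect (0..1)."""
    covered = coverage * failure_rate            # -> detected-failure state
    uncovered = (1.0 - coverage) * failure_rate  # -> latent-failure state
    return covered, uncovered

# Example: 2e-5 failures/h with 98% diagnostic coverage.
lam_c, lam_u = split_transition_rates(2e-5, 0.98)
print(f"covered: {lam_c:.2e}/h, uncovered (latent): {lam_u:.2e}/h")
```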
{"title":"A design tool for fault tolerant systems","authors":"G. Turconi, E. Di Perna","doi":"10.1109/RAMS.2000.816328","DOIUrl":"https://doi.org/10.1109/RAMS.2000.816328","url":null,"abstract":"Complex systems may have to meet severe availability objectives related to the importance of the service being provided; such systems must be fault tolerant. Designers of fault-tolerant systems try to implement diagnostics to detect as many faults as possible because, in complex systems, uncovered faults lead to latent highly undesired situations. Unfortunately, diagnostics themselves may fail. Starting from the basics of FMECA, a design methodology and a tool have been developed. It is called DIANA (DIagnostic ANAlysis). The basic idea of DIANA is to perform coverage analysis during hardware and firmware design together with reliability engineering analysis. To this purpose, DIANA has been integrated into the computer aided design (CAD) tools in the same way that logic simulation timing analysis and analog transmission simulation are performed. Two main results have been obtained by the DIANA project: the first is to give the designers a tool that helps them to think in such a way as to prevent uncovered fault situations; the second is to calculate the effects of faults on diagnostics in order to provide transition rates to system availability models when real, rather than ideal, cases are taken into account.","PeriodicalId":178321,"journal":{"name":"Annual Reliability and Maintainability Symposium. 2000 Proceedings. International Symposium on Product Quality and Integrity (Cat. No.00CH37055)","volume":"57 3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-01-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131425525","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Accelerated reliability test results: importance of input vibration spectrum and mechanical response of test article
Pub Date: 2000-01-24 | DOI: 10.1109/RAMS.2000.816316
S. Jawaid, P. Rogers
Repetitive shock (RS) and electrodynamic (ED) vibration systems produce substantially different vibration conditions at the input point to the test article. These differences are most evident in peak G level and spectral content. The RS system produces vibration in short bursts that contain extremely high G amplitudes at the leading edge of each air hammer impact. The frequency content of the RS spectrum is nonuniform and exhibits many "holes" in the test spectrum. The ED system produces a continuous vibration time history whose peak G amplitudes vary within a moderate, programmable range. The distribution of vibration energy over the test spectrum is uniform and easily programmed using accelerometer feedback (closed-loop) control.
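For a closed-loop ED profile the delivered energy is easy to quantify: overall g-rms is the square root of the area under the PSD curve. A sketch for piecewise-flat profiles (the profile values are illustrative, not from the paper):

```python
import math

def grms_from_psd(segments):
    """Overall g-rms of a random vibration profile given piecewise-flat
    PSD segments: a list of (f_low_Hz, f_high_Hz, psd_g2_per_Hz).
    g_rms = sqrt(area under the PSD curve)."""
    area = sum(psd * (f_hi - f_lo) for f_lo, f_hi, psd in segments)
    return math.sqrt(area)

# Illustrative flat ED-shaker profile: 0.04 g^2/Hz from 20 to 2000 Hz.
print(f"{grms_from_psd([(20.0, 2000.0, 0.04)]):.2f} g-rms")
# An RS table's spectrum is not flat: the "holes" (low-PSD bands) deliver
# little energy in those bands even when the overall g-rms looks similar.
```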
{"title":"Accelerated reliability test results: importance of input vibration spectrum and mechanical response of test article","authors":"S. Jawaid, P. Rogers","doi":"10.1109/RAMS.2000.816316","DOIUrl":"https://doi.org/10.1109/RAMS.2000.816316","url":null,"abstract":"Repetitive shock (RS) and electrodynamic (ED) vibration systems produce substantially different vibration conditions at the input point to the test article. These differences are most evident in terms of peak G level and spectrum content. The RS system produces vibration in short bursts which contain extremely high G amplitudes at the leading edge of each air hammer impact. The frequency content of the RS spectrum is nonuniform and exhibits many \"holes\" in the test spectrum. The ED system produces a continuous vibration time history that contains peak G amplitudes that vary within a moderate, programmable range. The distribution of vibration energy over the test spectrum is uniform and easily programmed using accelerometer feedback (closed-loop) control.","PeriodicalId":178321,"journal":{"name":"Annual Reliability and Maintainability Symposium. 2000 Proceedings. International Symposium on Product Quality and Integrity (Cat. No.00CH37055)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-01-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130578421","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A physics/engineering of failure based analysis and tool for quantifying residual risks in hardware
Pub Date: 2000-01-24 | DOI: 10.1109/RAMS.2000.816338
S. Cornford, M. Gibbel, M. Feather, D. Oberhettinger
NASA Code Q is supporting efforts to improve the verification and validation and risk management processes for spaceflight projects. A previously developed, physics-of-failure-based Defect Detection and Prevention (DDP) methodology has been integrated into a software tool and is currently being implemented on various NASA projects and as part of NASA's new model-based spacecraft development environment. The DDP methodology begins by prioritizing the mission-relevant risks (or failure modes, FMs) that need to be addressed. These risks can be reduced through the implementation of a set of detection and prevention activities referred to herein as PACTs (preventative measures, analyses, process controls, and tests). Each PACT has some effectiveness against one or more FMs but also has an associated resource cost. The FMs can be weighted according to their likelihood of occurrence and their mission impact should they occur. The net effectiveness of various combinations of PACTs can then be evaluated against these weighted FMs to obtain the residual risk for each FM and the resource cost of achieving that risk level. The process thus identifies the project-relevant "tall pole" FMs and design drivers and allows real-time tailoring as the design and technology content evolve. The DDP methodology enables risk management in its truest sense: it identifies and assesses risk, provides options and tools for risk decision making and mitigation, and allows real-time tracking of current risk status.
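The accounting the abstract implies can be sketched directly: each FM carries a weight (likelihood times impact), each PACT removes some fraction of specific FMs' risk at a cost, and a PACT combination is scored by residual risk versus total cost. This sketch assumes PACT effects combine multiplicatively and independently; the FM names, effectiveness values, and costs are invented for illustration:

```python
def residual_risks(fm_weights, effectiveness, selected_pacts):
    """Residual risk per failure mode after applying a set of PACTs.
    fm_weights: {fm: likelihood * mission impact}
    effectiveness: {(pact, fm): fraction of that FM's risk the PACT removes}
    Assumes independent, multiplicative risk reduction."""
    residual = {}
    for fm, weight in fm_weights.items():
        remaining = weight
        for pact in selected_pacts:
            remaining *= 1.0 - effectiveness.get((pact, fm), 0.0)
        residual[fm] = remaining
    return residual

fm_weights = {"solder_fatigue": 0.30, "latchup": 0.12}
effectiveness = {
    ("thermal_cycling_test", "solder_fatigue"): 0.7,
    ("parts_screening", "latchup"): 0.5,
}
pact_costs = {"thermal_cycling_test": 80.0, "parts_screening": 25.0}

selected = ["thermal_cycling_test", "parts_screening"]
print(residual_risks(fm_weights, effectiveness, selected))
print("cost:", sum(pact_costs[p] for p in selected))
```

Sweeping `selected` over candidate PACT sets and comparing residual risk against cost is what surfaces the "tall pole" FMs and the cheapest ways to knock them down.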
{"title":"A physics/engineering of failure based analysis and tool for quantifying residual risks in hardware","authors":"S. Cornford, M. Gibbel, M. Feather, D. Oberhettinger","doi":"10.1109/RAMS.2000.816338","DOIUrl":"https://doi.org/10.1109/RAMS.2000.816338","url":null,"abstract":"NASA Code Q is supporting efforts to improve the verification and validation and the risk management processes for spaceflight projects. A physics-of-failure based Defect Detection and Prevention (DDP) methodology previously developed has been integrated into a software tool and is currently being implemented on various NASA projects and as part of NASA's new model-based spacecraft development environment. The DDP methodology begins with prioritizing the risks (or failure modes, FMs) relevant to a mission which need to be addressed. These risks can be reduced through the implementation of a set of detection and prevention activities referred to herein as PACTs (preventative measures, analyses, process controls and tests). Each of these PACTs has some effectiveness against one or more FMs but also has an associated resource cost. The FMs can be weighted according to their likelihood of occurrence and their mission impact should they occur. The net effectiveness of various combinations of PACTs can then be evaluated against these weighted FMs to obtain the residual risk for each of these FMs and the associated resource costs to achieve these risk levels. The process thus identifies the project-relevant \"tall pole\" FMs and design drivers and allows real time tailoring with the evolution of the design and technology content. The DDP methodology allows risk management in its truest sense: it identifies and assesses risk, provides options and tools for risk decision making and mitigation and allows for real-time tracking of current risk status.","PeriodicalId":178321,"journal":{"name":"Annual Reliability and Maintainability Symposium. 2000 Proceedings. International Symposium on Product Quality and Integrity (Cat. No.00CH37055)","volume":"62 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-01-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133926836","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Quantifying the effects of commercial processes on availability of small manned-spacecraft
Pub Date: 2000-01-24 | DOI: 10.1109/RAMS.2000.816326
M. Álvarez, T. Jackson
The methodology described herein identifies and quantifies equipment- and system-level failure modes based on the criticality of their effects on system functionality. It is useful for systems that require high-reliability assessments, such as spacecraft developed with "faster, better, cheaper" commercial processes. The authors developed this methodology by integrating the similarity/failure-cause analysis methods developed by the International Electrotechnical Commission (1999) with the process grading methods developed by the Reliability Analysis Center (1998). Since the advent of Acquisition Reform in 1994, the authors have studied the effectiveness of many of the "streamlined" reliability assessment techniques used in military space programs. What they learned is that every method can be of some use in identifying, mitigating, or estimating reliability risk, but selecting the minimal set of methods for a high-reliability assessment requires looking beyond task names. Management organizations must understand how the product-performance objectives are supported by the methods used. In the authors' experience, most spacecraft manufacturers do not practice performance-based reliability assessment methods, yet they successfully meet or exceed the predicted availability/reliability of their systems. However, the few satellite and launch vehicles that failed in 1998 and 1999 resulted in billions of dollars of financial losses and managerial shakeups at some major corporations. In general, public opinion is tolerant of these kinds of losses because they are perceived as the cost of doing commercial business in space. That is not the case when a manned Space Shuttle transport fails. Over the next few years, the commercial spacecraft industry will develop small to medium-size, single-stage manned spacecraft. Organizations will have to use methods for identifying, mitigating, and predicting critical failure modes more accurately than those currently used for unmanned systems.
{"title":"Quantifying the effects of commercial processes on availability of small manned-spacecraft","authors":"M. Álvarez, T. Jackson","doi":"10.1109/RAMS.2000.816326","DOIUrl":"https://doi.org/10.1109/RAMS.2000.816326","url":null,"abstract":"The methodology described herein identifies and quantifies equipment and system level failure modes based on the criticality of their effects on system functionality. This methodology is useful for systems that require high-reliability assessments, such as, spacecraft that are developed with \"faster, better, cheaper\" commercial processes. The authors developed this methodology by integrating the similarity/failure cause analyses methods developed by the International Electrical Commission (1999) with the process grading methods developed by the Reliability Analysis Center (1998). Since the advent of Acquisition Reform in 1994, the authors have studied the effectiveness of many of the \"streamlined\" reliability assessment techniques used in military space programs. What they learned is that every method can be of some use in identifying, mitigating or estimating reliability risk, but selecting the minimal set of methods for a high-reliability assessment requires looking beyond task names. Management organizations must understand how the product-performance objectives are supported by the methods used. Based on the authors' experiences, most spacecraft manufacturers do not practice performance-based reliability assessment methods, and yet they successfully meet or exceeded the predicted availability/reliability of their systems. However, the few satellite and launch vehicles that failed in 1998 and 1999 resulted in billions of dollars of financial losses and managerial shakeups at some major corporations. In general, public opinion is tolerant of these kinds of losses because they are perceived as the cost of doing commercial business in space. That is not the case when failure of a manned Space Shuttle transport occurs. Over the next few years, the commercial spacecraft industry will develop small to medium-size, single-stage manned-spacecraft. Organizations will have to use methods for identifying, mitigating and predicting critical failure modes more accurately than those currently used for unmanned-systems.","PeriodicalId":178321,"journal":{"name":"Annual Reliability and Maintainability Symposium. 2000 Proceedings. International Symposium on Product Quality and Integrity (Cat. No.00CH37055)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-01-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130046509","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}