A graphical model-based reliability estimation tool and failure mode and effects simulator
D. Nicol, D. Palumbo, M.L. Ulrey
Annual Reliability and Maintainability Symposium 1995 Proceedings. Pub Date: 1995-01-16. DOI: 10.1109/RAMS.1995.513226
A new graphical reliability estimation tool, the Reliability Performance Module (RPM), is described. RPM combines the features of a powerful reliability tool, the Reliability Estimation System Testbed (REST), developed at NASA Langley, with the convenient graphical modelling and simulation capabilities of an off-the-shelf commercial software package, the Block Oriented Network Simulator (BONeS), from the Alta Group of Cadence Design Systems. To estimate the reliability of a system, the built-in BONeS graphics capabilities are used to describe the system, and the embedded REST execution engine produces a reliability analysis automatically. An additional benefit of this approach is that a detailed failure modes and effects analysis can be derived using the simulation capabilities of the tool. The usage of, and output from, RPM is demonstrated with an example system. Compared to our current design process, RPM promises to reduce overall modelling and analysis time, provide better documentation, make trade studies easier, create reusable modelling components and subsystems, and provide the integration of reliability and timing analysis necessary to guarantee the safety of critical real-time systems. Future work will concentrate on a more seamless integration of the reliability and timing analyses. Additional planned enhancements include a distributed (parallel) processing mode, and availability and phased-mission analysis capabilities.
Nickel dendrites: a new failure mechanism in ceramic hermetic packages
A. Kostić, A. Rensch, D. Sturm
Annual Reliability and Maintainability Symposium 1995 Proceedings. Pub Date: 1995-01-16. DOI: 10.1109/RAMS.1995.513221
A new failure mechanism, nickel dendrites, was identified in hermetic ceramic packages. Nickel dendrites resulted from an unauthorized change in the supplier's assembly process. The change caused lots to be produced with package ambient moisture levels ranging from 10% to 20% by volume. Device cooling in the system application reduced the package temperature below the dew point of the internal package ambient and allowed water to condense. The liquid water absorbed materials from the ambient atmosphere and reacted with the nickel underplating of the package conductors. Normal operating voltages provided the electrical potential necessary for the growth of nickel dendrites. Burn-in was not effective in screening out this failure mechanism because the temperature during burn-in was above the dew point of the package ambient. The supplier revised their assembly procedures to prevent unauthorized process changes of this type. UNISYS purged all devices in the suspect date code range from factory and field inventory. Corrective actions were implemented by UNISYS and the supplier, with the result that this failure mechanism was eliminated from both field and factory. The nickel dendrite failure mechanism has not previously been reported in the literature. Hermetic ceramic packaging is widely used. The existence of a new failure mechanism has tremendous potential impact on product reliability, process controls, reliability prediction, and failure analysis.
Applying fuzzy cognitive-maps knowledge-representation to failure modes effects analysis
C. Peláez, J. Bowles
Annual Reliability and Maintainability Symposium 1995 Proceedings. Pub Date: 1995-01-16. DOI: 10.1109/RAMS.1995.513283
A failure mode and effects analysis (FMEA) seeks to determine how a system will behave in the event of a device failure. It involves the integration of several expert tasks: selecting components for analysis, determining failure modes, predicting failure effects, proposing corrective actions, etc. During an FMEA, numerical values are often not available or applicable, and qualitative thresholds and linguistic terms such as high, slightly high, low, etc., are usually more relevant to the design than numerical expressions. Fuzzy set theory and fuzzy cognitive maps provide a basis for automating much of the reasoning required to carry out an FMEA on a system. They offer a suitable technique for symbolic reasoning in the FMEA instead of numerical methods, thus providing human-like interpretations of the system model under analysis, and they allow for the integration of multiple expert opinions. This paper describes how fuzzy cognitive maps can be used to describe a system, its missions, its failure modes, and their causes and effects. The maps can then be evaluated using both numerical and graphical methods to determine the effects of a failure and the consistency of design decisions.
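The forward-inference step of a fuzzy cognitive map can be sketched in a few lines. This is only a minimal illustration of the general technique, not the paper's model: the three concepts, the weight matrix, and the binary threshold below are all invented for the example.

```python
# Minimal fuzzy-cognitive-map inference sketch (concepts and weights are
# invented for illustration; the paper's actual maps are richer).
concepts = ["pump failure", "loss of pressure", "alarm raised"]

# W[i][j] = causal influence of concept i on concept j, in [-1, 1].
W = [
    [0.0, 0.9, 0.0],   # pump failure -> loss of pressure
    [0.0, 0.0, 0.8],   # loss of pressure -> alarm raised
    [0.0, 0.0, 0.0],
]

def step(state, clamp):
    """One synchronous update with a binary threshold; clamped
    concepts (the postulated failure mode) stay active."""
    nxt = []
    for j in range(len(state)):
        act = sum(state[i] * W[i][j] for i in range(len(state)))
        nxt.append(1 if act > 0.5 else 0)
    for i in clamp:
        nxt[i] = 1
    return nxt

# Postulate "pump failure" and iterate to a fixed point; the concepts
# that end up active are the predicted failure effects.
state = [1, 0, 0]
while True:
    nxt = step(state, clamp={0})
    if nxt == state:
        break
    state = nxt
effects = [c for c, s in zip(concepts, state) if s]
print(effects)  # all three concepts activate along the causal chain
```

Richer maps use signed fractional weights and a sigmoid squashing function instead of a hard threshold, but the fixed-point iteration is the same.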
New benchmark for unreplicated experimental-design analysis
C. Benski, E. Cabau
Annual Reliability and Maintainability Symposium 1995 Proceedings. Pub Date: 1995-01-16. DOI: 10.1109/RAMS.1995.513252
The purpose of this paper is to present a summary of the most extensive benchmark conducted to assess the performance of nine numerical techniques for analyzing unreplicated experimental designs. These designs have previously been shown to be relevant to reliability growth programs. The numerical techniques evolved out of the difficulty of applying classical analysis-of-variance methods when the measured response is not replicated. Since these techniques are particularly valuable under such circumstances, it was considered important to assess their statistical performance under typical experimental conditions. The authors introduce a figure of merit to rank the techniques according to their ability to identify active factors and reject spurious ones. Using this figure of merit they show that, in spite of their great conceptual differences, the nine techniques perform similarly.
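One widely used numerical technique for unreplicated designs is Lenth's pseudo-standard-error method; the abstract does not name the nine techniques benchmarked, so the sketch below is only an illustration of the kind of analysis involved, using made-up effect estimates.

```python
import statistics

# Made-up contrast estimates from a hypothetical unreplicated 2^(7-4) design.
effects = {"A": 21.6, "B": 1.2, "C": -0.8, "D": 2.1, "E": -1.0, "F": 0.9, "G": 1.5}

abs_e = [abs(v) for v in effects.values()]
s0 = 1.5 * statistics.median(abs_e)             # initial robust scale estimate
trimmed = [a for a in abs_e if a < 2.5 * s0]    # drop apparently active effects
pse = 1.5 * statistics.median(trimmed)          # pseudo standard error

# Rough critical multiplier (a t quantile with m/3 degrees of freedom;
# Lenth's published tables give the exact value for m = 7 effects).
margin = 3.8 * pse
active = [name for name, v in effects.items() if abs(v) > margin]
print(pse, active)  # only the one large contrast is flagged as active
```

With no replicates there is no pure-error estimate of variance, so the method bootstraps a scale estimate from the bulk of the (mostly inert) contrasts and flags the outliers as active factors.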
A comparison of software-testing methodologies
C. Smidts, D. Sova
Annual Reliability and Maintainability Symposium 1995 Proceedings. Pub Date: 1995-01-16. DOI: 10.1109/RAMS.1995.513287
The fundamental aim of this study is to better understand the software testing process within the Software Engineering Laboratory (SEL) in order to continually improve the software development process. In particular, we compare three testing methodologies employed within the SEL. The software development life cycle process, the testing methodologies and their comparison, and the software application are discussed.
On reliability growth testing
E. Demko
Annual Reliability and Maintainability Symposium 1995 Proceedings. Pub Date: 1995-01-16. DOI: 10.1109/RAMS.1995.513241
Reliability development growth testing (RDGT) is the most common method used to improve equipment reliability. The author had an opportunity to analyze hardware that experienced environmental stress screening (ESS), environmental qualification testing (EQT), RDGT, and field usage. The failure mode and corrective action data were used to qualitatively assess the effectiveness of RDGT. The results of this analysis yield the following conclusions: (1) RDGT is not a very good precipitator of field-related failure modes, therefore RDGT alone does not appear to be a strong driver of reliability growth; (2) RDGT, ESS, and EQT tests precipitate a high percentage of failure modes that occur only in "chamber-type" environments and are not related to field use; (3) of the three "chamber-type" tests (ESS, RDGT, and EQT) evaluated as precipitators of field-related failure modes, ESS appears to be the most effective; and (4) "chamber-type" tests are more efficient than field operation in developing corrective actions.
A nonexponential approach to availability modeling
D.W. Jacobson, S. Arora
Annual Reliability and Maintainability Symposium 1995 Proceedings. Pub Date: 1995-01-16. DOI: 10.1109/RAMS.1995.513254
Most current state-of-the-art availability models are based on continuous-time Markov chains. These involve the restrictive assumption that both the failure-time and repair-time distributions are exponential. In many situations, the exponential distribution is not applicable for failure times and/or repair times. A general approach for calculating instantaneous availability is presented. It is applicable to systems or subsystems which are assumed to be returned to approximately their original state upon the completion of repair. It is based on the renewal equation A(t) = R(t) + ∫₀ᵗ R(t−s) m(s) ds, where R is the reliability function and m the renewal density. The first case study is a validation study, since the uptimes and downtimes are both assumed to follow an exponential distribution. In this case, an analytical result for A(t) can be obtained, so the results of the analytical approach and the proposed approach can be compared. An analysis of the results shows the proposed approach to be very reasonable. In the second case study, the uptimes are assumed to follow a Weibull distribution while the downtimes have a lognormal distribution.
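The central equation can be exercised numerically. The sketch below (rates, step size, and horizon are all assumed for illustration) reproduces the exponential/exponential validation case: the renewal density m is obtained by discretizing its renewal equation with the trapezoid rule, and the resulting A(t) is checked against the closed-form answer μ/(λ+μ) + λ/(λ+μ)·e^(−(λ+μ)t).

```python
import math

lam, mu = 0.5, 2.0   # assumed failure and repair rates
h, T = 0.01, 5.0     # assumed step size and horizon
n = int(T / h) + 1
t = [i * h for i in range(n)]

# Reliability of one up period and density of one up+down cycle
# (hypoexponential: convolution of Exp(lam) and Exp(mu), lam != mu).
R = [math.exp(-lam * ti) for ti in t]
f = [lam * mu / (mu - lam) * (math.exp(-lam * ti) - math.exp(-mu * ti))
     for ti in t]

# Solve the renewal equation m(t) = f(t) + int_0^t f(t-s) m(s) ds.
# f(0) = 0 and m(0) = 0, so the trapezoid endpoints vanish and each
# m[i] is explicit.
m = [0.0] * n
for i in range(1, n):
    s = sum(f[i - j] * m[j] for j in range(1, i))
    m[i] = f[i] + h * s

def A(i):
    """Instantaneous availability A(t_i) = R + int_0^t R(t-s) m(s) ds."""
    s = 0.5 * R[0] * m[i]  # trapezoid endpoint at s = t (m(0) endpoint is 0)
    s += sum(R[i - j] * m[j] for j in range(1, i))
    return R[i] + h * s

A_exact = mu / (lam + mu) + lam / (lam + mu) * math.exp(-(lam + mu) * T)
print(A(n - 1), A_exact)  # both are close to the steady-state value 0.8
```

For the Weibull/lognormal case of the second case study no closed form exists, but the same discretization applies once R and f are replaced by the corresponding survival function and cycle density.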
Modeling silent failures in telecommunications systems
J.A. Stanshine
Annual Reliability and Maintainability Symposium 1995 Proceedings. Pub Date: 1995-01-16. DOI: 10.1109/RAMS.1995.513255
A silent failure is a condition in a telecommunications or other system which, when it occurs, remains undetected by normal fault detection methods. With the steady-state Markov models usually used to predict telecommunications system reliability, estimates of downtime for systems with silent failures may be substantially higher than actual system downtime. This is because a system with silent failures frequently comes nowhere close to approaching steady state during the system's finite life. This paper proposes a modification to the standard steady-state Markov reliability models. The proposed modification adds a state transition effectively representing complete replacement of the system under study. With the modified model, this transition occurs at a rate 2/T, where T is the system or study life. The paper includes examples and theorems demonstrating that the method produces accurate results in a wide range of circumstances.
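The effect of the proposed modification can be illustrated on a toy three-state chain; the states, rates, and study life below are invented for the example and are not from the paper. The silently failed state receives an outgoing "replacement" transition at rate 2/T, which keeps the steady-state solution from piling probability into the undetected state over an unboundedly long horizon.

```python
# Toy availability model with a silent-failure state (all rates invented).
lam_d = 1e-3     # detected failure rate, per hour
lam_s = 1e-6     # silent (undetected) failure rate, per hour
mu    = 0.5      # repair rate for detected failures, per hour
T     = 87600.0  # assumed study life: 10 years in hours
r     = 2.0 / T  # replacement rate from the paper's modification

# States: 0 = up, 1 = silently failed, 2 = detected failed.
# Balance equations: pi1 * r = pi0 * lam_s and pi2 * mu = pi0 * lam_d.
pi0 = 1.0 / (1.0 + lam_s / r + lam_d / mu)
pi1 = pi0 * lam_s / r
pi2 = pi0 * lam_d / mu
unavailability = pi1 + pi2
print(pi0, pi1, pi2, unavailability)
```

Without the replacement transition (r = 0), the silently failed state is never left, so the steady-state model drives the predicted unavailability toward 1 regardless of how rare silent failures are; with r = 2/T the prediction stays bounded, which is the overestimation the abstract describes.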
The role of the R&M disciplines in the new NASA
R. C. Lisk
Annual Reliability and Maintainability Symposium 1995 Proceedings. Pub Date: 1995-01-16. DOI: 10.1109/RAMS.1995.513232
With the increasing demand to accomplish scientific missions with fewer resources, NASA has been reexamining the technical approaches to its research and development activities. The Agency Strategic Plan, approved by the Administrator in the spring of 1994, states: "We will conduct our programs such that we are the recognized international leader in the safety, quality and mission assurance activities. We will use a systematic and disciplined approach involving the adequacy, oversight, and support to the technical risk decision making process." The Office of Safety and Mission Assurance (OSMA) at NASA Headquarters has the responsibility for making this operating principle a reality. OSMA has expressed as its objectives: (1) establish/maintain SRM&QA functions as aggressive contributing elements in the planning, development and implementation of NASA programs and strategic enterprises; (2) continually refine the NASA Safety and Mission Assurance Program to anticipate evolving technological requirements; (3) promote technical excellence and continual improvement in SRM&QA products and services in support of our program customers; and (4) promote the development of innovative methods/techniques to achieve safety and mission success and S&MA technology advancement.
A verification tool to measure software in critical systems
S.K. Iwohara, Dar-Biau Liu
Annual Reliability and Maintainability Symposium 1995 Proceedings. Pub Date: 1995-01-16. DOI: 10.1109/RAMS.1995.513263
Software metrics have previously been established to evaluate the software development process throughout the software life cycle, and have been effective in helping to determine how a software design is progressing. These metrics are used to uncover favorable and unfavorable design trends and to identify potential problems and deficiencies early in the development process, reducing costly redesign or the delivery of immature, error-prone software. One area where design metrics play an important role is in identifying misunderstandings between the software engineer and the system or user requirements due to incorrect or ambiguous statements of requirements. However, the metrics developed to date do not consider the additional interface to the safety engineer when developing critical systems. Because a software error in a computer-controlled critical system can potentially result in death, injury, loss of equipment or property, or environmental harm, a safety metrics set was developed to ensure that the safety requirements are well understood and correctly implemented by the software engineer. This paper presents a safety metrics set that can be used to evaluate the maturity of the hazard analysis process and its interaction with the software development process.