Translating user diagnostics, reliability, and maintainability needs into specifications
F. Born, N.H. Criscimagna
Annual Reliability and Maintainability Symposium 1995 Proceedings. Pub Date: 1995-01-16. DOI: 10.1109/RAMS.1995.513231
Military users state the requirements for a new system in the Operational Requirements Document, using the measures with which they manage their systems. These measures are not suitable for specifying the system's required performance to a contractor, so a translation from user needs to specifications is necessary. A methodology for this translation of reliability, maintainability, and diagnostics needs is being developed and incorporated into a PC software tool. The tool will provide a disciplined, auditable method for performing the needs-to-specification translation.
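As a rough illustration of the kind of translation the abstract describes, consider converting a user-stated Mean Time Between Maintenance (a fleet-management measure that counts induced and no-defect-found events) into a contractual MTBF covering only inherent failures. This is a minimal sketch under invented assumptions; the function, factor names, and values are hypothetical and are not the authors' methodology.

```python
# Hypothetical needs-to-specification translation sketch. The assumption
# (not from the paper): only a fraction of user-observed maintenance
# events stem from inherent hardware failures that a contractor controls.

def mtbf_spec_from_mtbm(mtbm_hours, inherent_fraction):
    """Translate user MTBM into a specified MTBF.

    mtbm_hours: mean time between maintenance events seen by the user
    inherent_fraction: share of those events caused by inherent failures
    """
    if not 0 < inherent_fraction <= 1:
        raise ValueError("inherent_fraction must be in (0, 1]")
    # Inherent failure rate = inherent_fraction / MTBM, so MTBF = MTBM / fraction
    return mtbm_hours / inherent_fraction

# User need: one maintenance action every 200 h, 60% inherent failures
print(mtbf_spec_from_mtbm(200.0, 0.6))  # specified MTBF ≈ 333.3 h
```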
Combining imperfect coverage with digraph models
S. A. Doyle, J. Dugan, M. Boyd
Pub Date: 1995-01-16. DOI: 10.1109/RAMS.1995.513277
We present a prototype implementation of a program that computes the unreliability of a system from a digraph model of the system and coverage models for the individual components. The C program we have written takes as input a system description that expresses failure modes in terms of a digraph model; this, together with coverage probability information, is used to produce a quantitative unreliability result. The problem being addressed is an important one. The goal is not only to improve the validity of the model but also to keep the framework simple, usable, and adaptable. A more complete model allows more realistic analysis. It is essential that life-critical systems meet their required level of accuracy; excluding any of the factors discussed here could result in serious miscalculations. One benefit of performing a quantitative analysis is that the digraph models can be used to analyze the dependability of the system being designed, facilitating tradeoff analysis when alternative designs are considered. Another attractive feature of the proposed approach is that it can be used in conjunction with pre-existing tools to enhance an existing diagnosis process without significantly affecting the time, money, or effort involved. Within fault diagnosis, a quantitative analysis allows lists of possible failure causes to be prioritized by the probabilities associated with those events; in other words, the paths of the digraphs are weighted so that the most likely causes can be considered first.
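The prioritization idea in the closing sentence can be sketched in a few lines: weight each candidate failure cause by its estimated probability and examine the most likely causes first. The cause names and probabilities below are illustrative assumptions, not data from the paper.

```python
# Minimal sketch of probability-weighted diagnosis prioritization.
# Candidate causes (digraph paths) are ranked so the most likely
# cause is inspected first.

def prioritize_causes(causes):
    """causes: dict mapping cause name -> estimated probability.
    Returns (name, probability) pairs, most likely first."""
    return sorted(causes.items(), key=lambda kv: kv[1], reverse=True)

candidates = {"valve stuck": 0.02, "sensor drift": 0.15, "pump failure": 0.08}
for name, p in prioritize_causes(candidates):
    print(f"{name}: {p:.2f}")
```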
Enhancing supportability through life-cycle definitions
D. Followell
Pub Date: 1995-01-16. DOI: 10.1109/RAMS.1995.513276
Design specifications for functional systems are typically derived from the extreme environmental conditions expected during operational use. This practice can result in a system that is severely overdesigned and therefore excessively heavy, expensive, and complex. A prime example is the military's requirement for cold-temperature operation at -55°C, a temperature that has not been reached in twenty years. Equally likely, a design based only on operational environments may be inadequate because nonoperational environments, such as handling, transportation, storage, and maintenance, have been ignored. If these nonoperational environments drive the durability of the system, failures will occur and reliability will suffer, resulting in increased life-cycle costs and reduced operational readiness. The United States Air Force has recognized this shortcoming in the design process and requires newly developed systems to be designed to endure the environments imposed by the entire life-cycle profile, from manufacturing through deployment, operational usage, and maintenance. Unfortunately, the procedures and data used to develop these life-cycle profiles are not consistent from one development to the next. The Mission Environmental Requirements Integration Technology (MERIT) program was created to solve this problem. This technology will result in decreased environmental-definition costs, an optimum design for a given application, reduced cycle times, and decreased life-cycle warranty and maintenance costs.
Assessing mixture-model goodness-of-fit with an application to automobile warranty data
K. Majeske, G. Herrin
Pub Date: 1995-01-16. DOI: 10.1109/RAMS.1995.513272
Changing market conditions and improved manufacturing quality are reflected in recent extensions of automobile warranty coverage from 12 months/12,000 miles to 5 years/50,000 miles and beyond. The reliability engineer's challenge of predicting future warranty claims over a longer lifetime becomes even more difficult as the possible causal factors evolve from the "vital few" associated with early Pareto problem solving to more diverse external contributing factors. Using initial vehicle warranty claim data to predict future claims is further complicated because the automobile design and assembly process continuously evolve via engineering changes throughout the product life cycle. This paper demonstrates graphical techniques, hazard analysis, and likelihood ratio tests for assessing goodness of fit, i.e., the hypothesis that the proposed models are predictively valid. The work shows that automobile warranty data are more appropriately modeled as Weibull/uniform and Poisson/binomial mixtures than as individual Weibull and Poisson processes. Changes in the way automobile manufacturers store and view warranty data are necessary to implement models of this type; such changes will also allow linking to engineering and manufacturing data to evaluate the effectiveness of ongoing product and process design changes.
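One standard graphical technique of the kind the abstract mentions is a Weibull probability plot fitted by median-rank regression: if the plotted points are linear, a single Weibull fits; systematic curvature suggests a mixture. The sketch below is a generic textbook procedure under invented failure times, not the authors' dataset or method.

```python
import math

# Median-rank regression Weibull fit: regress ln(-ln(1 - F)) on ln(t).
# The slope estimates the shape (beta), the intercept gives the scale (eta).

def weibull_mrr(times):
    """Fit a Weibull to complete failure data; returns (beta, eta)."""
    t = sorted(times)
    n = len(t)
    xs, ys = [], []
    for i, ti in enumerate(t, start=1):
        f = (i - 0.3) / (n + 0.4)          # Benard's median-rank approximation
        xs.append(math.log(ti))
        ys.append(math.log(-math.log(1.0 - f)))
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    beta = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
            / sum((x - xbar) ** 2 for x in xs))
    eta = math.exp(xbar - ybar / beta)     # from intercept = -beta * ln(eta)
    return beta, eta

times = [120, 310, 480, 700, 950, 1400]   # illustrative times-to-claim
beta, eta = weibull_mrr(times)
print(beta, eta)
```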
Databases for reliability and probabilistic risk assessment
M. Thaggard
Pub Date: 1995-01-16. DOI: 10.1109/RAMS.1995.513265
NASA Headquarters is developing a risk-assessment-reliability-availability-maintainability (RRAMS) database architecture that includes two types of database files. The first type incorporates historical information derived from test-range launch performance records. The second type assembles reliability calculations and the statistical uncertainties determined after the launch performance reports were evaluated.
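A minimal sketch of the second file type, a reliability estimate with a statistical uncertainty derived from launch records, might look like the following. The counts are invented, and the normal-approximation (Wald) interval is a simplifying assumption, not NASA's actual procedure.

```python
import math

# Binomial reliability estimate from launch success/failure counts,
# with an approximate 95% confidence interval (Wald, illustrative only).

def launch_reliability(successes, trials, z=1.96):
    p = successes / trials
    half = z * math.sqrt(p * (1 - p) / trials)   # normal-approx half-width
    return p, max(0.0, p - half), min(1.0, p + half)

p, lo, hi = launch_reliability(47, 50)           # hypothetical launch record
print(f"estimate {p:.3f}, 95% bounds ({lo:.3f}, {hi:.3f})")
```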
Operational availability modeling for risk and impact analysis
D. Hurst
Pub Date: 1995-01-16. DOI: 10.1109/RAMS.1995.513274
Availability is a system performance parameter that provides insight into the probability that an item or system will be available to be committed to a specified requirement. Depending on the application, availability can be defined to include reliability, maintainability, and logistic-support information. For fleet-management purposes, the ability to quantify availability in terms of all of its contributing elements is essential. This paper discusses a steady-state operational availability model that can assist the Canadian Air Force in its aircraft fleet-management requirements. The model embodies scheduled and unscheduled maintenance and allows impact analysis using in-service maintenance data. It is sensitive to fleet size, aircraft flying rate, frequency of downing events, aircraft maintainability, scheduled inspection frequency, and scheduled inspection duration. The predictive capability of this availability model is providing the Canadian Air Force with a more sophisticated maintenance-analysis decision-support capability. For this paper to be available for general distribution, it must be unclassified; as a result, the case studies presented do not reveal the actual operational availability of any Canadian Air Force fleet. However, the level of detail provided is more than adequate to illustrate the case studies and give insight into applications of the model.
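A steady-state operational availability calculation of the general uptime/(uptime + downtime) form, with separate unscheduled and scheduled contributions, can be sketched as follows. The decomposition and parameter values are illustrative assumptions, not the Canadian Air Force model itself.

```python
# Steady-state operational availability sketch:
#   Ao = uptime / (uptime + downtime),
# with downtime accrued per operating hour from unscheduled downing
# events and from periodic scheduled inspections.

def operational_availability(mtbde, mdt_unscheduled,
                             insp_interval, insp_duration):
    """mtbde: mean time between downing events (h)
    mdt_unscheduled: mean downtime per downing event (h)
    insp_interval / insp_duration: scheduled inspection cycle (h)"""
    down_per_hour = (mdt_unscheduled / mtbde) + (insp_duration / insp_interval)
    return 1.0 / (1.0 + down_per_hour)

# Hypothetical fleet: downing event every 100 h costing 5 h, plus a
# 25 h inspection every 500 flying hours
print(round(operational_availability(100.0, 5.0, 500.0, 25.0), 4))  # 0.9091
```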
Vibration fatigue of surface mount technology (SMT) solder joints
S. Liguore, D. Followell
Pub Date: 1995-01-16. DOI: 10.1109/RAMS.1995.513218
Recent trends in the reliability analysis of electronics have involved developing structural-integrity models for predicting the failure-free operating lifetime under vibratory and thermal environmental exposure. This paper describes a test program performed to obtain structural fatigue data for SMT solder joints exposed to a random vibration environment. A total of eight printed-circuit-board specimens with nine surface-mounted components were fabricated and tested. Vibration time-to-failure data for the individual solder joints of the SMT components were recorded. These data became the basis for understanding the physics of why and how SMT solder joints fail under vibration loading. Using procedures similar to those developed for aerospace structures, a fatigue model based on the physics of the problem was developed.
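Physics-based fatigue models of the general kind described here often combine a power-law (Basquin) S-N curve with Miner's linear damage rule. The sketch below shows that generic combination; the constants and stress levels are invented and are not the paper's fitted SMT solder values.

```python
# Generic fatigue sketch: Basquin S-N curve plus Miner's rule.
# Failure is predicted when the cumulative damage sum reaches 1.0.

def cycles_to_failure(stress, fatigue_coeff=1.0e4, basquin_exp=-0.12):
    """Basquin law: stress = fatigue_coeff * N ** basquin_exp,
    inverted to give cycles to failure N at a stress amplitude."""
    return (stress / fatigue_coeff) ** (1.0 / basquin_exp)

def miner_damage(stress_cycles):
    """stress_cycles: list of (stress amplitude, applied cycles)."""
    return sum(n / cycles_to_failure(s) for s, n in stress_cycles)

# Random vibration crudely approximated by two discrete stress levels
d = miner_damage([(2000.0, 1.0e5), (3000.0, 2.0e4)])
print(d)   # damage near 1.0 means the joint is near end of life
```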
Time-dependent logic for goal-oriented dynamic-system analysis
M. L. Roush, Xiaozhong Wang
Pub Date: 1995-01-16. DOI: 10.1109/RAMS.1995.513256
This paper addresses the need for adequate methodologies for analyzing systems that are inherently dynamic and require explicitly time-dependent analysis. It highlights an error that can arise in analyzing accident scenarios when time dependence is ignored, and provides a simple, straightforward solution that avoids the more powerful (and hence more complex) dynamic event-tree techniques. The solution is a straightforward extension of the current fault-tree/event-tree approach that incorporates a time-dependent algebraic formalism into the analysis. A goal tree provides a success-oriented logic structure that efficiently implements the time-dependent logic for dynamic-system analysis.
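A toy Monte Carlo illustrates the class of error the paper highlights: suppose an accident occurs only if component B fails after protective component A has already failed. A static AND gate ignores the ordering and overstates the accident probability. The exponential failure rates below are illustrative assumptions, not an example from the paper.

```python
import random

# Static vs. sequence-dependent AND gate: with iid exponential failure
# times, the ordered event (A before B) is only half as likely as the
# unordered "both fail" event that a static fault tree would compute.

random.seed(1)

def simulate(rate_a, rate_b, mission, trials=200_000):
    static = dynamic = 0
    for _ in range(trials):
        ta = random.expovariate(rate_a)
        tb = random.expovariate(rate_b)
        if ta < mission and tb < mission:
            static += 1            # static AND gate: both failed in mission
            if ta < tb:
                dynamic += 1       # accident requires A to fail first
    return static / trials, dynamic / trials

p_static, p_dynamic = simulate(0.01, 0.01, 100.0)
print(p_static, p_dynamic)
```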
A modified bathtub curve with latent failures
J. English, Li Yan, T. L. Landers
Pub Date: 1995-01-16. DOI: 10.1109/RAMS.1995.513249
Burn-in and stress screening are becoming increasingly popular in the commercial electronics industry as customers grow more sensitive to failures occurring during the useful life of a product or system. For example, thermal stress screening (TSS) is an assembly-level electronics manufacturing process that evolved from the burn-in processes used in NASA and DoD programs. While burn-in subjects the product to expected field extremes to expose infant mortalities (latent failures), TSS briefly exposes a product to rapid temperature rates of change and out-of-spec temperatures to trigger failures that would otherwise occur during the product's useful life. To support this known failure behavior, the classical bathtub curve should be modified to aid the economic modeling of various screen types. We have conducted extensive modeling efforts that have resulted in a systematic approach to explicitly modeling the latent failures in the bathtub curve; this paper describes those efforts. The resulting failure distribution is a truncated, mixed Weibull distribution, which is proving to be an effective and relatively simple means of modeling the complex failure behavior of a system. With this increased flexibility, we can measure the impact of stress screens under varying conditions and ultimately design optimal screens.
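The mixed-Weibull idea can be illustrated by the hazard of a two-subpopulation mixture: a small latent-defect subpopulation with a decreasing hazard (shape < 1) blended with a strong main population with an increasing hazard (shape > 1) yields an early infant-mortality hump. The parameter values are illustrative, not the paper's truncated model.

```python
import math

# Hazard of a two-subpopulation Weibull mixture: h(t) = f(t) / R(t),
# where f and R are mixture density and mixture reliability.

def mixed_weibull_hazard(t, p=0.05, beta1=0.5, eta1=100.0,
                         beta2=3.0, eta2=10_000.0):
    """p: latent-defect fraction; (beta1, eta1): weak subpopulation;
    (beta2, eta2): main population."""
    def f(t, b, e):
        return (b / e) * (t / e) ** (b - 1) * math.exp(-((t / e) ** b))
    def r(t, b, e):
        return math.exp(-((t / e) ** b))
    dens = p * f(t, beta1, eta1) + (1 - p) * f(t, beta2, eta2)
    rel = p * r(t, beta1, eta1) + (1 - p) * r(t, beta2, eta2)
    return dens / rel

# Early hazard (latent defects) far exceeds mid-life hazard: bathtub shape
print(mixed_weibull_hazard(1.0), mixed_weibull_hazard(1000.0))
```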
Using complexity-tracking in software development
David I. Heimann
Pub Date: 1995-01-16. DOI: 10.1109/RAMS.1995.513280
CATS (Complexity Analysis and Tracking System) is a complexity-tracking system that uses the McCabe complexity analysis tool to construct and maintain an ongoing database of structural-complexity values for a software system as it proceeds through development and testing. Building on previous work that indicated a correlation between structural complexity and defect corrections, CATS allows a tighter focus of code-review efforts such as walkthroughs and inspections, and aids the design of regression, unit, and system tests. CATS has been implemented in the development and testing process for an operating-system software component denoted here as System A. The implementation at System A involves two ongoing groups, the BIT (Build, Inspect, and Test) team and the development teams. The BIT team builds the source files, runs CATS, identifies modules for special attention in review and testing, uses the complexity information to design and execute test suites, and reports results to the development teams through a notes-file conference. The development teams use the information in their coding efforts and report their responses and experiences through replies in the notes file. This creates a body of data, experience, and lessons learned for use in further development. A CATS analysis has also been carried out for an operating-system facility in VMS (denoted as Facility B).
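The review-focusing step in the workflow above reduces, in essence, to flagging modules whose cyclomatic complexity exceeds a threshold and ranking them for inspection. This sketch uses invented module names and values; the real CATS tool derives the numbers from the McCabe analyzer.

```python
# Flag high-complexity modules for walkthroughs/inspections, worst first.
# A threshold of 10 is a commonly cited McCabe guideline, used here as
# an illustrative default.

def flag_for_review(complexities, threshold=10):
    """complexities: dict of module -> cyclomatic complexity.
    Returns (module, complexity) pairs over the threshold, worst first."""
    flagged = [(m, c) for m, c in complexities.items() if c > threshold]
    return sorted(flagged, key=lambda mc: mc[1], reverse=True)

modules = {"sched.c": 23, "io.c": 7, "parse.c": 14, "util.c": 4}
for name, c in flag_for_review(modules):
    print(name, c)
```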