Toward a quantifiable definition of software faults
Pub Date: 2002-11-12 | DOI: 10.1109/ISSRE.2002.1173299
J. Munson, A. Nikora
An important aspect of developing models relating the number and type of faults in a software system to a set of structural measurements is defining what constitutes a fault. By definition, a fault is a structural imperfection in a software system that may lead to the system's eventually failing. A measurable and precise definition of what faults are makes it possible to accurately identify and count them, which in turn allows the formulation of models relating fault counts and types to other measurable attributes of a software system. Unfortunately, the most widely used definitions are not measurable; there is no guarantee that two different individuals looking at the same set of failure reports and the same set of fault definitions will count the same number of underlying faults. The incomplete and ambiguous nature of current fault definitions adds a noise component to the inputs used in modeling fault content. If this noise component is sufficiently large, any attempt to develop a fault model will produce invalid results. As part of our ongoing work in modeling software faults, we have developed a method of unambiguously identifying and counting faults. Specifically, we base our recognition and enumeration of software faults on the grammar of the language of the software system. By tokenizing the differences between a version of the system exhibiting a particular failure behavior and the version in which changes were made to eliminate that behavior, we are able to unambiguously count the number of faults associated with that failure. With modern configuration management tools, the identification and counting of software faults can be automated.
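The abstract's core idea, counting faults as grammar-level token changes between the failing and the repaired version, can be illustrated with a minimal sketch. This is not the authors' implementation; it assumes Python source code and uses Python's own tokenize module as a stand-in for "the grammar of the language of the software system", and the function names are illustrative.

```python
# Sketch: count faults as token-level change regions between two versions,
# assuming Python's tokenizer approximates a grammar-based tokenization.
import difflib
import io
import tokenize


def tokens_of(source: str) -> list[str]:
    """Tokenize a source string according to the language grammar."""
    return [tok.string
            for tok in tokenize.generate_tokens(io.StringIO(source).readline)
            if tok.string.strip()]


def count_faults(failing_version: str, fixed_version: str) -> int:
    """Count contiguous token-level change regions between the version
    exhibiting the failure and the version eliminating it."""
    before, after = tokens_of(failing_version), tokens_of(fixed_version)
    matcher = difflib.SequenceMatcher(a=before, b=after)
    # Each non-equal opcode (replace/insert/delete run) is counted once.
    return sum(1 for op, *_ in matcher.get_opcodes() if op != "equal")


if __name__ == "__main__":
    buggy = "def area(r):\n    return 3.14 * r\n"      # missing the square
    fixed = "def area(r):\n    return 3.14 * r * r\n"  # failure-eliminating change
    print(count_faults(buggy, fixed))  # one change region -> one fault
```

In practice the two versions would be pulled automatically from a configuration management system, which is what makes the counting procedure repeatable across observers.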
{"title":"Toward a quantifiable definition of software faults","authors":"J. Munson, A. Nikora","doi":"10.1109/ISSRE.2002.1173299","DOIUrl":"https://doi.org/10.1109/ISSRE.2002.1173299","url":null,"abstract":"An important aspect of developing models relating the number and type of faults in a software system to a set of structural measurement is defining what constitutes a fault. By definition, a fault is a structural imperfection in a software system that may lead to the system's eventually failing. A measurable and precise definition of what faults are makes it possible to accurately identify and count them, which in turn allows the formulation of models relating fault counts and types to other measurable attributes of a software system. Unfortunately, the most widely-used definitions are not measurable; there is no guarantee that two different individuals looking at the same set of failure reports and the same set of fault definitions will count the same number of underlying faults. The incomplete and ambiguous nature of current fault definitions adds a noise component to the inputs used in modeling fault content. If this noise component is sufficiently large, any attempt to develop a fault model will produce invalid results. As part of our on-going work in modeling software faults, we have developed a method of unambiguously identifying and counting faults. Specifically, we base our recognition and enumeration of software faults on the grammar of the language of the software system. By tokenizing the differences between a version of the system exhibiting a particular failure behavior, and the version in which changes were made to eliminate that behavior, we are able to unambiguously count the number of faults associated with that failure. With modern configuration management tools, the identification and counting of software faults can be automated.","PeriodicalId":159160,"journal":{"name":"13th International Symposium on Software Reliability Engineering, 2002. Proceedings.","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116562488","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Informal proof analysis towards testing enhancement
Pub Date: 2002-11-12 | DOI: 10.1109/ISSRE.2002.1173209
G. Lussier, H. Waeselynck
This paper addresses the verification of properties of generic fault-tolerance algorithms. Our goal is to enhance the testing process with information extracted from the proof of the algorithm, whether that proof is formal or informal: ideally, testing should focus on the weak parts of the proof (e.g., unproved lemmas or doubtful informal evidence). We use the Fault-Tolerant Rate Monotonic Scheduling algorithm as a case study. This algorithm was proven by informal demonstration, but two faults were revealed afterwards. In this paper, we focus on the analysis of the informal proof, which we restructure into a semiformal proof tree based on natural deduction. From this proof tree, we extract several functional cases and use them to test a prototype of the algorithm. Experimental results show that a flawed informal proof does not necessarily provide relevant information for testing. It remains to investigate whether formal (partial) proofs offer a better connection to potential faults.
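To make the proof-tree idea concrete, here is a minimal sketch of representing a semiformal proof as a tree and extracting its weakly supported steps as testing targets. The node structure, field names, and the toy claims are illustrative assumptions, not the authors' notation or the actual FT-RMS proof.

```python
# Sketch: a semiformal proof tree whose doubtful steps become test targets.
from dataclasses import dataclass, field


@dataclass
class ProofNode:
    """One step of the semiformal (natural-deduction style) proof tree."""
    claim: str
    justification: str          # e.g. "lemma", "informal argument", "case split"
    doubtful: bool = False      # unproved lemma or doubtful informal evidence
    children: list["ProofNode"] = field(default_factory=list)


def weak_parts(node: ProofNode) -> list[str]:
    """Collect claims with doubtful support; these become the functional
    cases on which testing of the algorithm prototype is focused."""
    found = [node.claim] if node.doubtful else []
    for child in node.children:
        found.extend(weak_parts(child))
    return found


if __name__ == "__main__":
    # Toy tree: the top-level claim rests on one sound lemma and one
    # informally argued case that testing should target.
    tree = ProofNode(
        "all critical tasks meet their deadlines after a processor failure",
        "case split",
        children=[
            ProofNode("backup copies fit into available idle slots", "lemma"),
            ProofNode("backup copies never overlap in time", "informal argument",
                      doubtful=True),
        ],
    )
    print(weak_parts(tree))  # ['backup copies never overlap in time']
```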
{"title":"Informal proof analysis towards testing enhancement","authors":"G. Lussier, H. Waeselynck","doi":"10.1109/ISSRE.2002.1173209","DOIUrl":"https://doi.org/10.1109/ISSRE.2002.1173209","url":null,"abstract":"This paper aims at verifying properties of generic fault-tolerance algorithms. Our goal is to enhance the testing process with information extracted from the proof of the algorithm, whether this proof is formal or informal: ideally, testing is intended to focus on the weak parts of the proof (e.g., unproved lemmas or doubtful informal evidence). We use the Fault-Tolerant Rate Monotonic Scheduling algorithm as a case study. This algorithm was proven by informal demonstration, but two faults were revealed afterwards. In this paper, we focus on the analysis of the informal proof, which we restructure in a semiformal proof tree based on natural deduction. From this proof tree, we extract several functional cases and use them for testing a prototype of the algorithm. Experimental results show that a flawed informal proof does not necessarily provide relevant information for testing. It remains to investigate whether formal (partial) proofs allow better connection with potential faults.","PeriodicalId":159160,"journal":{"name":"13th International Symposium on Software Reliability Engineering, 2002. Proceedings.","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128124627","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}