A software metric for logical errors and integration testing effort
R. Leach, D. Coleman
Pub Date: 1997-06-16. DOI: 10.1109/CMPASS.1997.613302
Many software metrics are based on analysis of individual source code modules and do not consider the way that modules are interconnected. This presents a special problem for the many current software development environments that use a considerable amount of commercial off-the-shelf or other reusable software components and devote considerable time to testing and integrating them. We describe a new metric, the BVA metric, based on an assessment of the coupling between program subunits. The metric draws on the testing technique known as boundary value analysis: for each parameter or global variable in a program module or subunit, we compute the number of test cases necessary for a "black-box" test of the subunit, based on partitioning the portion of the subunit's domain affected by that parameter. The BVA metric can be computed relatively early in the software life cycle. Experiments in several languages, in both academic and industrial programming environments, suggest a close predictive relationship with the density of logical software errors and with integration and testing effort.
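The abstract describes counting, per parameter or global, the black-box test cases implied by partitioning that parameter's portion of the subunit's domain. The paper does not give a formula, so the sketch below is a hypothetical reading that applies the classic boundary-value rule of thumb (test at each boundary, just beyond it, and at one interior point), with the per-range count of five being an assumption:

```python
def bva_test_cases(ranges):
    """ranges: list of (lo, hi) subdomains for one parameter.
    Each subdomain contributes 5 points: lo-1, lo, an interior value, hi, hi+1."""
    return 5 * len(ranges)

def bva_metric(subunit_params):
    """subunit_params: {param_or_global_name: list of (lo, hi) subdomains}.
    The subunit's BVA value totals the counts over all parameters and globals."""
    return sum(bva_test_cases(r) for r in subunit_params.values())

# Example: one parameter whose domain splits into two ranges, plus one
# global variable with a single range.
params = {"speed": [(0, 60), (61, 120)], "mode": [(0, 3)]}
print(bva_metric(params))  # 2*5 + 1*5 = 15
```

A subunit with many parameters or finely partitioned domains thus scores high, which is consistent with the paper's reading of the metric as a measure of coupling.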
"What is a formal method (and what is an informal method)?"
L. Hatton
Pub Date: 1997-06-16. DOI: 10.1109/CMPASS.1997.613286
This position paper traces a very personal view of formal methods in the period 1982-1997. The author describes his own experiences with formal methods, all the way from outright belief in the power of mathematics in the early 1980s to a measurement-tempered and rather cautious optimism in the late 1990s.
On the uniformity of error propagation in software
C. Michael, R. C. Jones
Pub Date: 1997-06-16. DOI: 10.1109/CMPASS.1997.613237
This paper presents an empirical study of an important aspect of software defect behavior: the propagation of data-state errors. A data-state error occurs when a fault is executed and affects a program's data state; it is said to propagate if it affects the outcome of the execution. Our results show that data-state errors appear to have a property that is quite useful when simulating faulty code: for a given input, either all data-state errors injected at a given location tend to propagate to the output, or else none of them do. These results are interesting because of what they indicate about the behavior of data-state errors in software. They suggest that data-state errors behave in an orderly way, and that the behavior of software may not be as unpredictable as it could theoretically be. Additionally, if all faults behave the same for a given input and a given location, then one can use simulation to get a good picture of how faults behave, regardless of whether the simulated faults are representative of real faults.
Quantitative reliability and availability assessment for critical systems including software
M. Hecht, Dong Tang, Herbert Hecht, Robert W. Brill (SoHaR Inc., Beverly Hills)
Pub Date: 1997-06-16. DOI: 10.1109/CMPASS.1997.613307
In many cases it is possible to derive a quantitative reliability or availability assessment for systems containing software through the appropriate use of system-level, measurement-based modeling and supporting data. This paper demonstrates the system-level measurement-based approach using a simplified safety protection system example, and contrasts it with other software reliability prediction methodologies. The treatment of multiple correlated and common-mode failures, systematic failures, and degraded states is also discussed. Finally, a tool called MEADEP, now under development, is described. The objective of the tool is to reduce the system-level measurement-based approach to a practical task that can be performed on systems with element failure rates as low as 10^-6 per hour.
Effect of repair policies on software reliability
S. Gokhale, P. Marinos, M. Lyu, Kishor S. Trivedi
Pub Date: 1997-06-16. DOI: 10.1109/CMPASS.1997.613262
Software reliability is an important metric that quantifies the quality of the software product and is inversely related to the number of unrepaired faults in the system. Fault removal is a critical process in achieving the desired level of quality before software deployment in the field. Conventional software reliability models assume that the time to remove a fault is negligible and that the repair process is perfect. We examine various repair scenarios and analyze the effect of these fault-removal policies on the residual number of faults at the end of the testing process, using a non-homogeneous continuous-time Markov chain. The fault removal rate is initially assumed to be constant and is subsequently extended to cover time and state dependencies. These fault-removal scenarios can be easily incorporated using the state-space view of the non-homogeneous Poisson process.
Tools for formal specification, verification, and validation of requirements
C. Heitmeyer, J. Kirby, B. Labaw
Pub Date: 1997-06-16. DOI: 10.1109/CMPASS.1997.613206
Although formal methods for developing computer systems have been available for more than a decade, few have had significant impact in practice. A major barrier to their use is that software developers find formal methods difficult to understand and apply. One exception is a formal method called SCR for specifying computer system requirements which, due to its easy-to-use tabular notation and its demonstrated scalability, has already achieved some success in industry. Recently a set of software tools, including a specification editor, a consistency checker, a simulator, and a verifier, has been developed to support the SCR method. This paper describes recent enhancements to the SCR tools: a new dependency graph browser, which displays the dependencies among the variables in the specification; an improved consistency checker, which produces detailed feedback about detected errors; and an assertion checker, which checks application properties during simulation. To illustrate the tool enhancements, a simple automobile cruise control system is presented and analyzed.
Information security: from reference monitors to wrappers
L. Badger
Pub Date: 1997-06-16. DOI: 10.1109/CMPASS.1997.613283
Information system security has turned out to be much more challenging than at first thought. In the 1980s a great deal of energy was expended in an attempt to create a broad market of security-enhanced systems. This market, however, did not develop, and most computer systems today include only rudimentary security mechanisms. New technologies, such as extensible systems and security wrappers, hold promise to reintroduce security as an effective and ubiquitous system service.
Simulation-based test of fault-tolerant group membership services
G. A. Alvarez, F. Cristian
Pub Date: 1997-06-16. DOI: 10.1109/CMPASS.1997.613291
We address the problem of gaining assurance of the correctness of fault-tolerant, real-time distributed protocols. We validate implementations of two group membership protocols by running a centralized simulation of the distributed system and testing whether the protocols satisfy the safety and timeliness properties prescribed by their specifications. Our testing environment performs deterministic experiments that include both normal workloads and failures injected into the execution, to test protocol behavior under the failure scenarios the protocols are supposed to tolerate. The two membership protocols assume different system models and depend on quite different sets of underlying services. Even though their specifications contain properties that cannot be evaluated accurately on a distributed platform, our testing environment overcomes this limitation. The tests performed uncovered several flaws in the implementations.
Perturbation analysis of computer programs
L. Morell, B. Murrill, Renata Rand
Pub Date: 1997-06-16. DOI: 10.1109/CMPASS.1997.613242
Error flow analysis is the study of how errors originate, spread, and propagate during program execution, based on the three steps of the fault/failure model: execution, infection, and propagation. These three steps are defined relative to a virtual computer; by judiciously selecting the instruction set and data state of this computer, the need for infection analysis can be reduced or eliminated in favor of execution and propagation analysis. A key aspect of propagation analysis is injecting errors into the data state and tracing their effect. Perturbation analysis injects errors by directly modifying the data state of an executing program. The resulting code that is executed (the tail code) is analyzed for its error-flow behavior. Perturbation analysis is a language-independent and efficient method of characterizing the propagation rate of each tail function, the function computed by all tail code originating at a given location. This paper defines a model for perturbation analysis and uses the model to explain the performance of analysis techniques (e.g. statement, data flow, and mutation analysis).
Is information security an oxymoron?
J. Knight
Pub Date: 1997-06-16. DOI: 10.1109/CMPASS.1997.613273
Although weaknesses have been demonstrated in some security techniques (encryption, protocols, mobile code such as Java, etc.), current security technology is quite strong in many areas. Despite this, information security has proved difficult to achieve in large modern software systems. Many problems have been reported in which supposedly secure systems have been penetrated and, in some cases, significant damage done. In practice, it appears that many (perhaps even the majority) of serious security failures are attributable to software engineering defects in the systems experiencing the failure. The author discusses the use of wrappers to deal with deficiencies in security and considers the software architectural approach.