Traditionally, in fault injection-based robustness evaluation of software (specifically of operating systems, OSs), faults or errors are injected at specific code locations. This paper studies the sensitivity and accuracy of the robustness evaluation results arising from varying the timing of injecting the faults into the OS. A strategy to guide the triggering of fault injection is proposed, based on the observation that the operational usage profile of a driver shows a high degree of regularity in the calls being made. The concept of call blocks (i.e., a distinct sequence of calls made to the driver) can be used to guide injections into different system states, corresponding to the driver operations carried out. A real-world case study compares the effectiveness of the proposed strategy with traditional location-based approaches, demonstrating that significant and useful insights can be gained by modulating the injection instants.
{"title":"On the Impact of Injection Triggers for OS Robustness Evaluation","authors":"A. Johansson, N. Suri, Brendan Murphy","doi":"10.1109/ISSRE.2007.23","DOIUrl":"https://doi.org/10.1109/ISSRE.2007.23","url":null,"abstract":"Traditionally, in fault injection-based robustness evaluation of software (specifically for operating systems - OS's), faults or errors are injected at specific code locations. This paper studies the sensitivity and accuracy of the robustness evaluation results arising from varying the timing of injecting the faults into the OS. A strategy to guide the triggering of fault injection is proposed, based on the observation that the operational usage profile of a driver shows a high degree of regularity in the calls being made. The concept of call blocks (i.e., a distinct sequence of calls made to the driver) can be used to guide injections into different system states, corresponding to the driver operations carried out. A real-world case study compares the effectiveness of the proposed strategy to traditional location-based approaches, demonstrating that significant and useful insights can be gained by modulating the injection instants.","PeriodicalId":193805,"journal":{"name":"The 18th IEEE International Symposium on Software Reliability (ISSRE '07)","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116579854","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The prediction of fault-prone modules in a software project has been the topic of many studies. In this paper, we investigate whether metrics available early in the development lifecycle can be used to identify fault-prone software modules. More precisely, we build predictive models using metrics that characterize textual requirements. We compare the performance of requirements-based models against that of code-based models and of models that combine requirement and code metrics. Using a range of modeling techniques and data from three NASA projects, our study indicates that early lifecycle metrics can play an important role in project management, either by pointing to the need for increased quality monitoring during development or by using the models to assign verification and validation activities.
{"title":"Fault Prediction using Early Lifecycle Data","authors":"Yue Jiang, B. Cukic, T. Menzies","doi":"10.1109/ISSRE.2007.24","DOIUrl":"https://doi.org/10.1109/ISSRE.2007.24","url":null,"abstract":"The prediction of fault-prone modules in a software project has been the topic of many studies. In this paper, we investigate whether metrics available early in the development lifecycle can be used to identify fault-prone software modules. More precisely, we build predictive models using the metrics that characterize textual requirements. We compare the performance of requirements-based models against the performance of code-based models and models that combine requirement and code metrics. Using a range of modeling techniques and the data from three NASA projects, our study indicates that the early lifecycle metrics can play an important role in project management, either by pointing to the need for increased quality monitoring during the development or by using the models to assign verification and validation activities.","PeriodicalId":193805,"journal":{"name":"The 18th IEEE International Symposium on Software Reliability (ISSRE '07)","volume":"71 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122379818","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In any software project, developers need to be aware of existing dependencies and how they affect their system. We investigated the architecture and dependencies of Windows Server 2003 to show how to use the complexity of a subsystem's dependency graph to predict the number of failures at statistically significant levels. Such estimations can help to allocate software quality resources to the parts of a product that need it most, and as early as possible.
{"title":"Predicting Subsystem Failures using Dependency Graph Complexities","authors":"Thomas Zimmermann, Nachiappan Nagappan","doi":"10.1109/ISSRE.2007.19","DOIUrl":"https://doi.org/10.1109/ISSRE.2007.19","url":null,"abstract":"In any software project, developers need to be aware of existing dependencies and how they affect their system. We investigated the architecture and dependencies of Windows Server 2003 to show how to use the complexity of a subsystem's dependency graph to predict the number of failures at statistically significant levels. Such estimations can help to allocate software quality resources to the parts of a product that need it most, and as early as possible.","PeriodicalId":193805,"journal":{"name":"The 18th IEEE International Symposium on Software Reliability (ISSRE '07)","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115832458","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Constraint-Based Testing (CBT) is the process of generating test cases against a testing objective by using constraint solving techniques. In CBT, testing objectives are given in the form of properties to be satisfied by the program's inputs and outputs. Whenever the program or the properties contain disjunctions or multiplications between variables, CBT faces the problem of solving non-linear constraint systems. Currently, existing CBT tools tackle this problem by exploiting a finite-domain constraint solver. But solving a non-linear constraint system over finite domains is NP-hard, and CBT tools fail to properly handle most of the properties to be tested. In this paper, we present a CBT approach in which a finite-domain constraint solver is enhanced with Dynamic Linear Relaxations (DLRs). DLRs are based on linear abstractions derived during the constraint solving process. They dramatically increase the solving capabilities of the solver in the presence of non-linear constraints without compromising the completeness or soundness of the overall CBT process. We implemented DLRs within the CBT tool TAUPO, which generates test data for programs written in C. The approach has been validated on difficult non-linear properties over a few (academic) C programs.
{"title":"Improving Constraint-Based Testing with Dynamic Linear Relaxations","authors":"Tristan Denmat, A. Gotlieb, M. Ducassé","doi":"10.1109/ISSRE.2007.34","DOIUrl":"https://doi.org/10.1109/ISSRE.2007.34","url":null,"abstract":"Constraint-Based Testing (CBT) is the process of generating test cases against a testing objective by using constraint solving techniques. In CBT, testing objectives are given under the form of properties to be satisfied by program's input/output. Whenever the program or the properties contain disjunctions or multiplications between variables, CBT faces the problem of solving non-linear constraint systems. Currently, existing CBT tools tackle this problem by exploiting a finite-domains constraint solver. But, solving a non-linear constraint system over finite domains is NP hard and CBT tools fail to handle properly most properties to be tested. In this paper, we present a CBT approach where a finite domain constraint solver is enhanced by Dynamic Linear Relaxations (DLRs). DLRs are based on linear abstractions derived during the constraint solving process. They dramatically increase the solving capabilities of the solver in the presence of non-linear constraints without compromising the completeness or soundness of the overall CBT process. We implemented DLRs within the CBT tool TAUPO that generates test data for programs written in C. The approach has been validated on difficult non-linear properties over a few (academic) C programs.","PeriodicalId":193805,"journal":{"name":"The 18th IEEE International Symposium on Software Reliability (ISSRE '07)","volume":"73 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127395615","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Software reliability, a key factor in software quality, quantifies software failures. Traditional software reliability growth models use the execution time during testing for reliability estimation. Although testing time is an important factor in reliability, the prediction accuracy of such models can likely be further improved by adding other parameters that affect the final software quality. Meanwhile, in software testing, test coverage has been regarded in the literature as an indicator of testing completeness and effectiveness. In this paper, we propose a novel method that integrates time and test coverage measurements to predict reliability. The key idea is that failure detection is related not only to the time the software spends under testing, but also to the fraction of the code that has been executed by the testing. This is the first time that execution time and test coverage are incorporated together into a single mathematical form to estimate the reliability achieved. We further extend this method to predict the reliability of fault-tolerant software systems. The experimental results with multi-version software show that our reliability model achieves a substantial estimation improvement compared with existing reliability models.
{"title":"Software Reliability Modeling with Test Coverage: Experimentation and Measurement with A Fault-Tolerant Software Project","authors":"Xia Cai, Michael R. Lyu","doi":"10.1109/ISSRE.2007.17","DOIUrl":"https://doi.org/10.1109/ISSRE.2007.17","url":null,"abstract":"As the key factor in software quality, software reliability quantifies software failures. Traditional software reliability growth models use the execution time during testing for reliability estimation. Although testing time is an important factor in reliability, it is likely that the prediction accuracy of such models can be further improved by adding other parameters which affect the final software quality. Meanwhile, in software testing, test coverage has been regarded as an indicator for testing completeness and effectiveness in the literature. In this paper, we propose a novel method to integrate time and test coverage measurements together to predict the reliability. The key idea is that failure detection is not only related to the time that the software experiences under testing, but also to what fraction of the code has been executed by the testing. This is the first time that execution time and test coverage are incorporated together into one single mathematical form to estimate the reliability achieved. We further extend this method to predict the reliability of fault- tolerant software systems. The experimental results with multi-version software show that our reliability model achieves a substantial estimation improvement compared with existing reliability models.","PeriodicalId":193805,"journal":{"name":"The 18th IEEE International Symposium on Software Reliability (ISSRE '07)","volume":"596 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116278420","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}