CLPS-MFL: Using Concept Lattice of Program Spectrum for Effective Multi-fault Localization
Xiaobing Sun, Bixin Li, Wanzhi Wen. 2013 13th International Conference on Quality Software, 29 July 2013. DOI: 10.1109/QSIC.2013.66

Abstract: Fault localization (FL) is an important but challenging task in software testing. Among the techniques studied in this field, using the program spectrum as a bug indicator is a promising approach. However, its effectiveness may degrade when multiple simultaneous faults are present. To alleviate this limitation, we propose a novel approach, CLPS-MFL, which combines concept lattices with program spectra to localize multiple faults. Our approach first uses formal concept analysis to transform the obtained program spectrum into a concept lattice. It then applies three strategies, based on properties of the concept lattice, to identify the root causes of failures. Our empirical studies on three subject programs validate the effectiveness of CLPS-MFL for localizing multiple faults.
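The core step of the approach — turning a program spectrum into a formal-concept structure — can be sketched as follows. The spectrum, test names, and statement labels below are invented for illustration, and CLPS-MFL's three ranking strategies are not reproduced; the sketch only shows how formal concept analysis groups statements that are jointly covered by failing runs.

```python
from itertools import combinations

# Hypothetical program spectrum: each test case maps to the set of
# statements it covers.  Tests t3 and t4 fail.
spectrum = {
    "t1": {"s1", "s2", "s3"},
    "t2": {"s1", "s3"},
    "t3": {"s1", "s2", "s4"},   # failing
    "t4": {"s2", "s4"},         # failing
}
failing = {"t3", "t4"}
all_stmts = set().union(*spectrum.values())

def intent(tests):
    """Statements shared by every test in the set (the empty set of tests
    shares all statements by convention)."""
    covs = [spectrum[t] for t in tests]
    return set.intersection(*covs) if covs else set(all_stmts)

def extent(stmts):
    """Tests that cover every statement in the set."""
    return {t for t, cov in spectrum.items() if stmts <= cov}

# Formal concepts are (extent, intent) pairs closed under the Galois
# connection; brute force over test subsets suffices at this toy scale.
concepts = set()
for r in range(len(spectrum) + 1):
    for combo in combinations(spectrum, r):
        b = intent(set(combo))
        concepts.add((frozenset(extent(b)), frozenset(b)))

# Statements in the intent of all failing tests are natural fault candidates.
candidates = intent(failing)
```

In this toy context the failing tests share exactly {s2, s4}, so the concept whose extent is the failing runs isolates those statements as candidates.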
Oracles Are Hardly Attain'd, and Hardly Understood: Confessions of Software Testing Researchers
W. Chan, T. Tse. 2013 13th International Conference on Quality Software, 29 July 2013. DOI: 10.1109/QSIC.2013.16

Abstract: In software testing, a test oracle is the mechanism for determining whether the results of the software under test agree with the expected outcomes. This requires a means to determine the expected outcomes, a means to gauge the actual results, and a means to decide whether the actual results agree with the expected outcomes. In real-life situations, however, a test oracle may not exist owing to a missing link in any of these aspects. In this paper, we summarize our research over the last 15 years on selected issues related to each of these aspects. We present the use of metamorphic testing, pattern classification, and formal object equivalence and nonequivalence to alleviate these problems.
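Metamorphic testing, one of the techniques summarized above, checks relations among multiple executions instead of comparing each output against a known expected value. A minimal sketch (the sine relation is a standard textbook example, not one taken from the paper):

```python
import math

def violates_sine_relation(x):
    """Metamorphic relation: sin(x) == sin(pi - x).  The relation can be
    checked even when no oracle supplies the expected value of sin(x);
    a violation reveals a fault in the implementation under test."""
    source_output = math.sin(x)               # source test case
    follow_up_output = math.sin(math.pi - x)  # follow-up test case
    return not math.isclose(source_output, follow_up_output, abs_tol=1e-12)
```

A faulty sine implementation that breaks the symmetry about pi/2 would violate the relation for some inputs, flagging a failure without any expected-output oracle.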
Supporting Reliability Modeling and Analysis for Component-Based Software Architecture: An XML-Based Approach
Weichao Luo, Linpeng Huang. 2013 13th International Conference on Quality Software, 29 July 2013. DOI: 10.1109/QSIC.2013.39

Abstract: With the recent development of Component-Based Software Engineering (CBSE), the importance of predicting non-functional properties, such as performance and reliability, has been widely acknowledged. A particular problem in CBSE stems from its specific development process: software components should be specified and implemented independently of their later context to enable reuse. Thus, non-functional properties of components need to be specified at the abstract level of the architecture. In this paper, we explore the possibility of supporting both reliability modeling and analysis for component-based software architectures through an XML-based approach. The contribution of this paper is twofold: first, we present an extension of xADL 3.0 that supports reliability modeling of software architectures; second, based on this extension, we propose a method for generating analysis-oriented models for reliability prediction. We demonstrate the applicability of our approach by modeling an example and conducting reliability prediction.
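As a toy illustration of architecture-level reliability prediction: component reliabilities annotated on an architecture description can be combined by an analysis model. The component names, values, and the simple series model below are assumptions for illustration only; the actual analysis model the paper generates from its xADL 3.0 extension is not reproduced here.

```python
from functools import reduce

# Hypothetical per-component reliabilities, as might be annotated on an
# architecture description (values invented for illustration).
components = {"Client": 0.999, "Dispatcher": 0.995, "Database": 0.99}

def series_reliability(rels):
    """Series model: the system fails if any component in the chain fails,
    so system reliability is the product of component reliabilities."""
    return reduce(lambda a, b: a * b, rels.values(), 1.0)

system_r = series_reliability(components)
```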
A Low-Cost Fault Tolerance Technique in Multi-media Applications through Configurability
Lanfang Tan, Ying Tan. 2013 13th International Conference on Quality Software, 29 July 2013. DOI: 10.1109/QSIC.2013.25

Abstract: As chip densities and clock rates increase, processors are becoming more susceptible to transient faults that affect program correctness. Fault tolerance is therefore increasingly important in computing systems. Fault tolerance techniques have two major concerns: (a) improving system reliability by detecting transient errors, and (b) reducing performance overhead. In this study, we propose a configurable fault tolerance technique targeting both high reliability and low performance overhead for multimedia applications. The basic principle is configurable levels of fault tolerance: different degrees of protection are applied to different parts of the source code of a multimedia application. First, a preliminary analysis is performed at the source-code level to classify the critical statements. Second, a fault injection process combined with statistical analysis is used to validate the partition with respect to a confidence level. Finally, checksum-based fault tolerance and instruction duplication are applied to critical statements, while no fault tolerance mechanism is applied to non-critical parts. Performance experiments demonstrate that our configurable technique yields significant performance gains compared with duplicating all instructions. The fault coverage of the scheme is also evaluated: fault injection results show that about 90% of outputs are correct at the application level, with only about 20% runtime overhead.
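The two protection mechanisms applied to critical statements — instruction duplication and checksum-based checking — can be mimicked at a high level. This is only a sketch of the idea in Python with hypothetical function names; the paper's technique operates at the instruction level, not as library calls.

```python
def critical_sum(values):
    """Instruction duplication: execute the critical computation twice and
    compare; a mismatch indicates a transient fault."""
    r1 = sum(values)
    r2 = sum(values)  # duplicated execution of the critical statement
    if r1 != r2:
        raise RuntimeError("transient fault detected in critical region")
    return r1

def checksum_protected(block):
    """Checksum-based protection: snapshot a checksum of the inputs before
    the protected computation and verify it afterwards, so silent corruption
    of the input data is detected."""
    before = sum(block) % 251
    result = [v * 2 for v in block]   # the protected computation
    after = sum(block) % 251          # inputs must be unchanged
    if before != after:
        raise RuntimeError("checksum mismatch: input corrupted")
    return result
```

Non-critical code would simply run unprotected, which is where the overhead savings over full duplication come from.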
A Theoretical Study: The Impact of Cloning Failed Test Cases on the Effectiveness of Fault Localization
Yichao Gao, Zhenyu Zhang, Long Zhang, Cheng Gong, Zheng Zheng. 2013 13th International Conference on Quality Software, 29 July 2013. DOI: 10.1109/QSIC.2013.23

Abstract: Statistical fault localization techniques analyze the dynamic program information obtained by executing a large number of test cases to predict fault positions in faulty programs. Related studies show that an imbalance between the number of passed test cases and the number of failed test cases may reduce the effectiveness of such techniques, and in practice failed test cases are frequently far fewer than passed test cases. In this study, we propose a strategy to generate a balanced test suite by cloning the failed test cases a suitable number of times, so that their count catches up with the number of passed test cases. We further present an analysis showing that such cloning can improve the effectiveness of two representative fault localization techniques under certain conditions, and never impairs it.
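The cloning strategy can be illustrated with the Ochiai formula, a commonly used suspiciousness metric (the paper analyzes two representative techniques, which are not named here; the spectrum below is invented). Cloning each failed test k times scales the failed-run counts while leaving passed-run counts unchanged:

```python
import math

def ochiai(ef, nf, ep):
    """Ochiai suspiciousness: ef / sqrt((ef + nf) * (ef + ep))."""
    denom = math.sqrt((ef + nf) * (ef + ep))
    return ef / denom if denom else 0.0

# Invented spectrum: per statement, (failed runs covering it, passed runs
# covering it); 2 failed and 10 passed test cases in total.
TOTAL_FAILED = 2
stmts = {"s1": (2, 6), "s2": (1, 1), "s3": (0, 8)}

def scores(clone_factor):
    """Suspiciousness after cloning each failed test `clone_factor` times."""
    total_failed = TOTAL_FAILED * clone_factor
    return {
        s: ochiai(ef * clone_factor, total_failed - ef * clone_factor, ep)
        for s, (ef, ep) in stmts.items()
    }

before = scores(1)   # unbalanced: 2 failed vs 10 passed
after = scores(5)    # cloned: 10 failed vs 10 passed
```

Before cloning, s1 and s2 tie at 0.5; after cloning, s1 (covered by every failed run) pulls ahead — the kind of ranking change the paper's analysis characterizes.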
Dealing with Software Model Quality in Practice: Experience in a Research Project
J. Vara, H. Espinoza. 2013 13th International Conference on Quality Software, 29 July 2013. DOI: 10.1109/QSIC.2013.40

Abstract: Although past research has produced different means of dealing with software model quality, the creation of adequate software models remains challenging. Any modelling effort must be carefully analysed and planned before it starts, and the definition or adoption of modelling guidelines is usually necessary. In addition, few publications address model quality in practice, and knowledge about others' experience with model quality is limited. This paper reports on our experience in dealing with software model quality in the context of a project between industry and academia: a large-scale research project in which modelling has been used both as part of the work of executing the project and for creating project results. We present how we have dealt with model quality in requirements modelling and in conceptual model specification, together with a set of lessons learned. The insights provided can help both researchers and practitioners who must deal with software model quality.
Energy Efficiency in Testing and Regression Testing -- A Comparison of DVFS Techniques
Edward Y. Y. Kan. 2013 13th International Conference on Quality Software, 29 July 2013. DOI: 10.1109/QSIC.2013.21

Abstract: This paper presents a pilot study on energy efficiency in software regression testing. Existing techniques that adjust CPU frequencies using Dynamic Voltage and Frequency Scaling (DVFS) fall into two categories: general and application-specific. However, general techniques ignore execution characteristics, and application-specific techniques require execution profiling. We propose two non-intrusive algorithms (Case Majority and Case Optimal), which exploit an insight about regression test cases to assure efficiency on modified program versions. We conducted experiments on three medium-size real-world benchmarks over a cycle-accurate power simulator. The empirical results show that applying our techniques in the context of regression testing saves more energy on one benchmark, and does not suffer lower performance on the other two benchmarks.
Backward-Slice-Based Statistical Fault Localization without Test Oracles
Yan Lei, Xiaoguang Mao, T. Chen. 2013 13th International Conference on Quality Software, 29 July 2013. DOI: 10.1109/QSIC.2013.45

Abstract: A recent promising technique for fault localization, Backward-Slice-based Statistical Fault Localization (BSSFL), statistically analyzes the backward slices and results of a set of test cases to evaluate the suspiciousness of each statement being faulty. However, like many existing fault localization approaches, BSSFL assumes the existence of a test oracle to determine whether the result of a test case is a pass or a failure. In reality, test oracles do not always exist, and in such cases BSSFL may be infeasible. Metamorphic testing has been widely studied as a technique for alleviating the oracle problem, so we leverage it to conduct BSSFL without test oracles. With metamorphic testing, our approach uses the backward slices and the violation or non-violation result of a metamorphic test group, rather than the backward slice and the pass/fail result of an individual test case as in BSSFL. Because our approach does not need the execution result of a test case, BSSFL can be extended to application domains where no test oracle exists. Experimental results on 8 programs and 2 groups of maximal suspiciousness evaluation formulas show that our approach achieves effectiveness comparable to that of existing BSSFL techniques in cases where test oracles exist.
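The shift from pass/fail to violated/non-violated can be sketched as follows: suspiciousness is computed over metamorphic test groups and the backward slices of their outputs. The slices, group results, and the use of the Ochiai formula here are illustrative assumptions, not the paper's exact formulas.

```python
import math

# Hypothetical data: for each metamorphic test group, the union of backward
# slices of its outputs (statement ids) and whether its relation is violated.
groups = [
    ({"s1", "s2"}, True),    # violated  -> plays the role of a failed run
    ({"s1", "s3"}, False),   # satisfied -> plays the role of a passed run
    ({"s2", "s4"}, True),
    ({"s3", "s4"}, False),
]

def suspiciousness(stmt):
    """Ochiai over metamorphic groups: violated groups whose slice contains
    the statement act like failing test cases in classic BSSFL."""
    vf = sum(1 for sl, v in groups if v and stmt in sl)       # in slice, violated
    nv = sum(1 for sl, v in groups if v and stmt not in sl)   # not in slice, violated
    vp = sum(1 for sl, v in groups if not v and stmt in sl)   # in slice, satisfied
    denom = math.sqrt((vf + nv) * (vf + vp))
    return vf / denom if denom else 0.0

scores = {s: suspiciousness(s) for s in ("s1", "s2", "s3", "s4")}
```

Here s2 appears in every violated slice and no satisfied one, so it tops the ranking without any per-test pass/fail verdict ever being computed.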
Taming Deadlocks in Multithreaded Programs
Yan Cai, W. Chan, Yuen-Tak Yu. 2013 13th International Conference on Quality Software, 29 July 2013. DOI: 10.1109/QSIC.2013.20

Abstract: Many real-world multithreaded programs contain deadlock bugs that should be detected and corrected. Many existing detection strategies do not scale consistently to large applications; many existing dynamic confirmation strategies may not reveal detectable deadlocks with high probability; and many existing runtime deadlock-tolerance strategies may incur high runtime overhead without preventing the same deadlock from recurring. This paper presents the current progress of our project on dynamic deadlock detection, confirmation, and resolution, and describes a test harness framework developed to support our proposed approach.
Impacts of Test Suite's Class Imbalance on Spectrum-Based Fault Localization Techniques
P. Rao, Zheng Zheng, T. Chen, Nan Wang, K. Cai. 2013 13th International Conference on Quality Software, 29 July 2013. DOI: 10.1109/QSIC.2013.18

Abstract: Spectrum-based fault localization (SBFL) uses the execution results of test cases to support debugging. There are two types of SBFL techniques: those using conventional slices and those using metamorphic slices. This paper investigates how the ratio of non-violated to violated metamorphic test groups in a test suite affects SBFL techniques that use metamorphic slices. We observed that the higher the ratio of non-violated to violated metamorphic test groups, the less effective these techniques become. This observation is consistent with what has been observed for SBFL techniques using conventional slices. In addition, a new real-life fault in the schedule2 program of the Siemens suite was identified in our experiments.
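The observed trend — effectiveness dropping as the non-violated/violated ratio grows — can be seen directly in a suspiciousness formula such as Ochiai (used here purely for illustration; the paper's exact techniques are not reproduced). Holding the violated-group counts of a faulty statement fixed while ever more non-violated groups also cover it dilutes its score:

```python
import math

def ochiai(vf, nv, sp):
    # vf: violated groups covering the statement; nv: violated groups not
    # covering it; sp: satisfied (non-violated) groups covering it.
    denom = math.sqrt((vf + nv) * (vf + sp))
    return vf / denom if denom else 0.0

# A faulty statement covered by all 5 violated groups: its score falls
# monotonically as the non-violated coverage count grows.
scores = [ochiai(5, 0, sp) for sp in (0, 5, 50, 500)]
```

The statement starts at the maximal score 1.0 and sinks toward 0 as the class imbalance grows, mirroring the effectiveness loss the paper reports.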