Classifying software artifacts into defect-prone (DP) or non-defect-prone (NDP) classes during the testing phase helps minimize software business costs; this classification task is central to the software defect prediction (SDP) field. Machine learning methods are helpful for the task, although they face the challenge of imbalanced data distributions, which leads to serious misclassification of artifacts and degrades the predictor's performance. Previously developed stacking ensemble methods do not consider the cost issue when handling the class imbalance problem (CIP) over the training dataset in the SDP field. To bridge this research gap, the cost-sensitive stacked generalization (CSSG) approach combines stacking ensemble learning with cost-sensitive learning (CSL), whose purpose is to reduce misclassification costs. In CSSG, logistic regression (LR) and extremely randomized trees classifiers, in both cost-sensitive and cost-insensitive settings, are used as the final classifier of the stacking scheme. To evaluate the performance of CSSG, we use six performance measures. Several experiments compare CSSG with several cost-sensitive ensemble methods on 15 benchmark datasets with different imbalance levels. The results indicate that CSSG is a more effective solution to the CIP than the compared methods.
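The core idea of CSL, predicting the class that minimizes expected misclassification cost rather than simply the most probable class, can be sketched in a few lines (the cost values below are hypothetical and not taken from the paper):

```python
# Illustrative sketch of cost-sensitive classification: instead of predicting
# "defect-prone" whenever P(DP) > 0.5, pick the class with the lower expected
# misclassification cost. The cost values here are hypothetical.

COST_FN = 10.0  # assumed cost of missing a defect-prone module (false negative)
COST_FP = 1.0   # assumed cost of flagging a clean module (false positive)

def cost_sensitive_label(p_dp: float) -> str:
    """Return 'DP' when the expected cost of predicting NDP exceeds that of DP."""
    expected_cost_ndp = p_dp * COST_FN          # risk incurred if we predict NDP
    expected_cost_dp = (1.0 - p_dp) * COST_FP   # risk incurred if we predict DP
    return "DP" if expected_cost_ndp >= expected_cost_dp else "NDP"

# The implied decision threshold drops from 0.5 to COST_FP / (COST_FP + COST_FN),
# so more modules are flagged as defect-prone when missed defects are expensive.
threshold = COST_FP / (COST_FP + COST_FN)
```

With these costs, a module with only a 20% predicted defect probability is still labelled DP, because a missed defect is assumed ten times as expensive as a false alarm.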
Z. Eivazpour and M. Keyvanpour, "CSSG: A cost-sensitive stacked generalization approach for software defect prediction," Software Testing Verification & Reliability, 8 February 2021. DOI: 10.1002/stvr.1761.
This paper proposes a generalization of the exponential software reliability model that characterizes several factors, including fault introduction and a time-varying fault detection rate. The software life cycle is modelled on the basis of module structure, including the testing effort spent during module testing and the software faults detected. Resource allocation is a critical phase in the testing stage of software reliability modelling: decisions must be made to allocate resources optimally among the modules to achieve the desired level of reliability. We formulate a multi-objective software reliability model of testing resources for a new generalized exponential reliability function that characterizes the dynamic allocation of total expected cost and testing effort. An enhanced particle swarm optimization (EPSO) is proposed to maximize software reliability and minimize allocation cost. We perform experiments with randomly generated testing-resource sets and assess performance variation using the entropy function. The multi-objective model is compared with modules according to weighted cost function and testing effort measures in a typical modular testing environment.
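The paper's EPSO variant is not specified here; as a rough illustration of the underlying mechanism, a minimal plain particle swarm optimizer applied to a stand-in objective might look like this (all parameter values are assumptions for the sketch):

```python
import random

def pso_minimize(f, dim=2, n_particles=20, iters=200, seed=0,
                 w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Minimal plain PSO (a sketch, not the paper's enhanced EPSO)."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]             # each particle's best position
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # swarm's best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # inertia + cognitive pull (own best) + social pull (swarm best)
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# A simple sphere function stands in for the allocation-cost objective.
best, best_val = pso_minimize(lambda x: sum(v * v for v in x))
```

A multi-objective version would replace the single objective with a weighted or Pareto-based comparison of reliability and cost, which is where the paper's entropy-based enhancement would enter.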
P. Rani and G. Mahapatra, "Entropy based enhanced particle swarm optimization on multi-objective software reliability modelling for optimal testing resources allocation," Software Testing Verification & Reliability, 5 February 2021. DOI: 10.1002/stvr.1765.
Kai Hu, Ji Wan, Kan Luo, Yuzhuang Xu, Zijing Cheng, W. Tsai
This paper proposes an algebraic system, verification algebra (VA), for reducing the number of component combinations to be verified in multi-tenant architecture (MTA). MTA is a design architecture used in SaaS (Software-as-a-Service) where a tenant can customize its applications by integrating services already stored in the SaaS databases or newly supplied services. Similar to SaaS, VaaS (Verification-as-a-Service) is a verification service in a cloud that leverages the computing power offered by a cloud environment with automated provisioning, scalability and service composition. In VaaS architecture, however, there is a challenging problem known as 'combinatorial explosion': even with cloud computing resources, it is difficult to verify the large number of compositions arising from both the quantity of components and the variety of combination structures. This paper proposes rules that derive the verification status of combinations for future verification from existing results. Both composition patterns and properties are considered and analysed in VA rules.
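The combinatorial explosion, and the reuse of existing verification results that VA aims at, can be illustrated with a toy sketch (component names and the known-failing pair are invented for illustration): pruning every composition that contains an already-failed sub-composition shrinks the set that still needs verification.

```python
from itertools import combinations

# Hypothetical tenant services and one prior verification result: the pair
# {"auth", "export"} is already known to fail composition checks.
components = ["auth", "billing", "search", "report", "export"]
known_failing = {frozenset({"auth", "export"})}

def needs_verification(combo):
    """Skip any composition that contains a subset already known to fail."""
    s = frozenset(combo)
    return not any(bad <= s for bad in known_failing)

# All 2- and 3-component compositions: C(5,2) + C(5,3) = 20 candidates.
all_pairs_and_triples = [c for k in (2, 3) for c in combinations(components, k)]
to_verify = [c for c in all_pairs_and_triples if needs_verification(c)]
```

Even this tiny example prunes four of twenty candidate compositions from a single prior result; with realistic component counts the candidate set grows exponentially, which is the problem VA's rules target.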
Kai Hu, Ji Wan, Kan Luo, Yuzhuang Xu, Zijing Cheng and W. Tsai, "Verification algebra for multi-tenant applications in VaaS architecture," Software Testing Verification & Reliability, 1 February 2021. DOI: 10.1002/stvr.1763.
According to the reachability–infection–propagation (RIP) model, three conditions must be satisfied for program failure to occur: (1) the defect's location must be reached, (2) the program's state must become infected and (3) the infection must propagate to the output. Weak coincidental correctness (or weak CC) occurs when the program produces the correct output, while condition (1) is satisfied but conditions (2) and (3) are not satisfied. Strong coincidental correctness (or strong CC) occurs when the output is correct, while both conditions (1) and (2) are satisfied but not (3). The prevalence of CC was previously recognized. In addition, the potential for its negative effect on spectrum‐based fault localization (SBFL) was analytically demonstrated; however, this was not empirically validated. Using Defects4J, this paper empirically studies the impact of weak and strong CC on three well‐researched coverage‐based fault detection and localization techniques, namely, test suite reduction (TSR), test case prioritization (TCP) and SBFL. Our study, which involved 52 SBFL metrics, provides the following empirical evidence. (i) The negative impact of CC tests on TSR and TCP is very significant. In addition, cleansing the CC tests was observed to yield (a) a 100% TSR defect detection rate for all subject programs and (b) an improvement of TCP for over 92% of the subjects. (ii) The impact of CC tests on SBFL varies widely w.r.t. the metric used. The negative impact was strong for 11 metrics, mild for 37, non‐measurable for 1 and non‐existent for 3 metrics. Interestingly, the negative impact was mild for the 9 most popular and/or most effective SBFL metrics. In addition, cleansing the CC tests resulted in the deterioration of SBFL for a considerable number of subject programs. (iii) Increasing the proportion of CC tests has a limited impact on TSR, TCP and SBFL. Interestingly, for TSR and TCP and 11 SBFL metrics, small and large proportions of CC tests are strongly harmful. 
(iv) Lastly, weak and strong CC are equally detrimental in the context of TSR, TCP and SBFL.
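The distinction the RIP model draws can be made concrete with a toy example (hypothetical code, not from the study): below, the defect is always reached, and for n != 0 the state is infected (the intermediate value differs from the intended one), yet the infection never propagates, because multiplying by an odd constant preserves parity, so every output is coincidentally correct.

```python
# Strong coincidental correctness in miniature: RIP conditions (1) and (2)
# hold, but (3) fails, so the test output masks the defect.

def is_even_buggy(n: int) -> bool:
    temp = n * 3          # defect: should be `temp = n`; state infected for n != 0
    return temp % 2 == 0  # parity of n*3 equals parity of n, so the bug is masked

def is_even_correct(n: int) -> bool:
    return n % 2 == 0
```

A coverage-based technique would see the defective line covered by passing tests, which is exactly the scenario that misleads TSR, TCP and SBFL in the study above.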
R. A. Assi, Wes Masri and Chadi Trad, "How detrimental is coincidental correctness to coverage-based fault detection and localization? An empirical study," Software Testing Verification & Reliability, 9 January 2021. DOI: 10.1002/stvr.1762.
This special issue contains extended versions of two papers from the 11th IEEE International Conference on Software Testing, Validation & Verification (ICST 2018). ICST strives to provide an open forum for researchers, scientists, engineers, and practitioners working on software testing to present and discuss the latest research findings, ideas and developments. ICST 2018 was no exception: its collocation with the quarterly meeting of the Swedish Association of Software Testing (SAST) led to a strong industry presence and cross-pollination of ideas. Based on the reviews from the program committee members, as well as discussion with the Editors-in-Chief about their relevance to STVR, we invited the authors of three papers to extend their papers and submit to the special section. The submitted manuscripts went through a rigorous reviewing process by a panel of experts that included, but was not limited to, members of the ICST 2018 program committee. During the process, two of the three papers were handled by one of us (Prof. Feldt) due to conflicts of interest. After finalizing the review and revision process, two papers were accepted for publication in this special issue. The first paper, “DeMiner: Test Generation for High Test Coverage through Mutant Exploration” by Shin Hong, Yunho Kim, and Moonzoo Kim, introduces a way of using mutation analysis to improve the code coverage achieved by concolic testing. Called “Invasive Software Testing”, the proposed technique exploits additional information about the concrete and dynamic behaviour of the System Under Test, via mutation testing, to set goalposts for improved concolic testing.
The second paper, “Effective Repair of Internationalization Presentation Failures in Web Applications Using Style Similarity Clustering and Search-Based Techniques” by Sonal Mahajan, Abdulmajeed Alameer, Phil McMinn, and William Halfond, introduces an automated repair technique for internationalization-related presentation failures in web pages. The proposed technique, IFix+, can address presentation failures that stem from interactions of multiple style components by clustering DOM elements based on their visual appearances. Subsequently, IFix+ uses a search-based approach to find a CSS patch that avoids introducing any new failures. We express our thanks to everyone who contributed to the success of ICST 2018, as well as to this special section. We are grateful to the authors for extending their papers and submitting their valuable work to STVR. We would also like to express our gratitude to everyone who worked hard to make ICST 2018 the great success it was: in the midst of a global pandemic, the memory of the conference is something we all cherish deeply. In particular, we thank the general chair, Hans Hansson. Finally, we would like to thank both Robert Hierons and Tao Xie for their support and guidance for this special issue.
R. Feldt and S. Yoo, "Special Issue: IEEE International Conference on Software Testing, Validation & Verification 2018," Software Testing Verification & Reliability, 1 January 2021. DOI: 10.1002/stvr.1764.
The 10th International Conference on Software Testing, Verification, and Validation (ICST 2017) was held on March 13 to 17, 2017, in Tokyo, Japan. The aim of the ICST conference is to bring together researchers and practitioners who study the theory, techniques, technologies, and applications that concern all aspects of software testing, verification, and validation of software systems. In the main research program, the ICST 2017 program chairs, Ina Schieferdecker and Hironori Washizaki, selected 36 full papers and 8 short papers for inclusion in the proceedings from among 135 submissions, based on the recommendation of the program committee. All papers were refereed by at least three program committee members. Of the 36 full papers accepted, we selected seven papers for consideration for this special issue of STVR. These papers were extended from their conference versions by the authors and were reviewed according to the standard STVR reviewing process. We thank all the ICST and STVR reviewers for their hard work. Three papers successfully completed the review process and are contained in this special issue. The rest of this editorial provides a brief overview of these three papers. The first paper ‐ “Choosing The Fitness Function for the Job: Automated Generation of Test Suites that Detect Real Faults” by Alireza Salahirad, Hussein Almulla, and Gregory Gay ‐ studies the effectiveness of different fitness functions in search-based unit test generation for detecting faults. Experimental results on real faults from the Defects4J database reveal that the branch coverage fitness function is the most effective. The study also reveals that the most important factor related to the likelihood of detection is the satisfaction of the chosen criterion’s test obligations.
The second paper ‐ “Complexity Vulnerability Analysis using Symbolic Execution” by Kasper Luckow, Rody Kersten, and Corina Pasareanu ‐ presents a symbolic execution technique for analyzing the worst-case complexity of programs. The technique uses path policies to guide the symbolic execution towards worst-case paths. The evaluation shows that the technique can detect complexity vulnerabilities in realistic software as well as in standard implementations of classic algorithms. The third paper ‐ “Model-based Testing of Apache ZooKeeper: Fundamental API Usage and Watchers” by Cyrille Artho, Kazuaki Banzai, Quentin Gros, Guillaume Rousset, Lei Ma, Takashi Kitamura, Masami Hagiya, Yoshinori Tanabe, and Mitsuharu Yamamoto ‐ presents a model-based testing technique to generate test cases for concurrent client sessions executing against ZooKeeper. The technique defines the semantics of watchers in ZooKeeper and a test oracle to handle the chain of events that eventually leads to the watcher being triggered. The evaluation shows that the technique can detect known defects as well as seeded mutations that implement possible flaws.
I. Schieferdecker, A. Memon and H. Washizaki, "Editorial for the special issue of STVR on the 10th IEEE International Conference on Software Testing, Verification, and Validation (ICST 2017)," Software Testing Verification & Reliability, 2 November 2020. DOI: 10.1002/stvr.1757.
Cyrille Artho, Kazuaki Banzai, Quentin Gros, Guillaume Rousset, L. Ma, Takashi Kitamura, M. Hagiya, Yoshinori Tanabe, M. Yamamoto
In this paper, we extend work on model-based testing for Apache ZooKeeper to handle watchers (triggers) and to improve scalability. In a distributed asynchronous shared storage like ZooKeeper, watchers deliver notifications on state changes. They are difficult to test because watcher notifications involve an initial action that sets the watcher, followed by another action that changes the previously seen state.
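ZooKeeper's watcher semantics, where a watch set during a read fires once on the next change and must be re-registered to observe further changes, can be mimicked with a small in-memory sketch (a toy stand-in, not the ZooKeeper client API):

```python
# Toy illustration of one-shot watchers: the two-step pattern (set the watch,
# then change the state) is what makes the behaviour hard to test.

class TinyStore:
    def __init__(self):
        self._data = {}
        self._watchers = {}  # key -> callbacks, cleared when they fire

    def get(self, key, watch=None):
        """Read a value; optionally register a one-shot watcher on the key."""
        if watch is not None:
            self._watchers.setdefault(key, []).append(watch)
        return self._data.get(key)

    def set(self, key, value):
        """Write a value and fire (then discard) any watchers on the key."""
        self._data[key] = value
        for cb in self._watchers.pop(key, []):  # one-shot: fire and forget
            cb(key, value)

events = []
store = TinyStore()
store.get("/node", watch=lambda k, v: events.append((k, v)))
store.set("/node", "a")   # triggers the registered watcher
store.set("/node", "b")   # no event: the watcher was one-shot
```

A test oracle for such a store must track which watchers are still armed at each step, which is exactly the bookkeeping the paper's model formalizes for the real system.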
Cyrille Artho, Kazuaki Banzai, Quentin Gros, Guillaume Rousset, L. Ma, Takashi Kitamura, M. Hagiya, Yoshinori Tanabe and M. Yamamoto, "Model-based testing of Apache ZooKeeper: Fundamental API usage and watchers," Software Testing Verification & Reliability, 1 November 2020. DOI: 10.1002/stvr.1720.