Using Search-Based Software Testing to Guide the Strive for Robust Machine Learning Components: Lessons Learned Across Systems and Simulators in the Mobility Domain
Markus Borg
Pub Date: 2022-04-01 | DOI: 10.1109/ICSTW55395.2022.00014
This talk shares lessons learned from using search-based techniques for robustness testing in simulators.
Systematic Generation of XSS and SQLi Vulnerabilities in PHP as Test Cases for Static Code Analysis
Felix Schuckert, Hanno Langweg, Basel Katt
Pub Date: 2022-04-01 | DOI: 10.1109/ICSTW55395.2022.00053
Synthetic test suites are important for testing the basic functionality of static code analysis tools. We present a framework that uses different source code patterns to generate Cross-Site Scripting (XSS) and SQL injection (SQLi) test cases. A decision tree is used to determine whether each test case is vulnerable. The test cases are split into two test suites. The first test suite contains 258,432 test cases that influence the decision trees. The second test suite contains 20 vulnerable test cases with different data flow patterns. The test cases are scanned with two commercial static code analysis tools to show that they can be used to benchmark static code analysis tools and identify their problems. Expert interviews confirm that the decision tree is a sound way to determine the vulnerable test cases and that the test suites are relevant.
{"title":"Systematic Generation of XSS and SQLi Vulnerabilities in PHP as Test Cases for Static Code Analysis","authors":"Felix Schuckert, Hanno Langweg, Basel Katt","doi":"10.1109/ICSTW55395.2022.00053","DOIUrl":"https://doi.org/10.1109/ICSTW55395.2022.00053","url":null,"abstract":"Synthetic static code analysis test suites are important to test the basic functionality of tools. We present a framework that uses different source code patterns to generate Cross Site Scripting and SQL injection test cases. A decision tree is used to determine if the test cases are vulnerable. The test cases are split into two test suites. The first test suite contains 258,432 test cases that have influence on the decision trees. The second test suite contains 20 vulnerable test cases with different data flow patterns. The test cases are scanned with two commercial static code analysis tools to show that they can be used to benchmark and identify problems of static code analysis tools. Expert interviews confirm that the decision tree is a solid way to determine the vulnerable test cases and that the test suites are relevant.","PeriodicalId":147133,"journal":{"name":"2022 IEEE International Conference on Software Testing, Verification and Validation Workshops (ICSTW)","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116352079","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Combinatorial Approach to Fairness Testing of Machine Learning Models
A. Patel, Jaganmohan Chandrasekaran, Yu Lei, R. Kacker, D. R. Kuhn
Pub Date: 2022-04-01 | DOI: 10.1109/ICSTW55395.2022.00030
Machine Learning (ML) models could exhibit biased behavior, or algorithmic discrimination, resulting in unfair or discriminatory outcomes. The bias in the ML model could emanate from various factors such as the training dataset, the choice of the ML algorithm, or the hyperparameters used to train the ML model. In addition to evaluating the model’s correctness, it is essential to test ML models for fair and unbiased behavior. In this paper, we present a combinatorial testing-based approach to perform fairness testing of ML models. Our approach is model agnostic and evaluates fairness violations of a pre-trained ML model in a two-step process. In the first step, we create an input parameter model from the training data set and then use the model to generate a t-way test set. In the second step, for each test, we modify the value of one or more protected attributes to see if we could find fairness violations. We performed an experimental evaluation of the proposed approach using ML models trained with tabular datasets. The results suggest that the proposed approach can successfully identify fairness violations in pre-trained ML models.
{"title":"A Combinatorial Approach to Fairness Testing of Machine Learning Models","authors":"A. Patel, Jaganmohan Chandrasekaran, Yu Lei, R. Kacker, D. R. Kuhn","doi":"10.1109/ICSTW55395.2022.00030","DOIUrl":"https://doi.org/10.1109/ICSTW55395.2022.00030","url":null,"abstract":"Machine Learning (ML) models could exhibit biased behavior, or algorithmic discrimination, resulting in unfair or discriminatory outcomes. The bias in the ML model could emanate from various factors such as the training dataset, the choice of the ML algorithm, or the hyperparameters used to train the ML model. In addition to evaluating the model’s correctness, it is essential to test ML models for fair and unbiased behavior. In this paper, we present a combinatorial testing-based approach to perform fairness testing of ML models. Our approach is model agnostic and evaluates fairness violations of a pre-trained ML model in a two-step process. In the first step, we create an input parameter model from the training data set and then use the model to generate a t-way test set. In the second step, for each test, we modify the value of one or more protected attributes to see if we could find fairness violations. We performed an experimental evaluation of the proposed approach using ML models trained with tabular datasets. The results suggest that the proposed approach can successfully identify fairness violations in pre-trained ML models.","PeriodicalId":147133,"journal":{"name":"2022 IEEE International Conference on Software Testing, Verification and Validation Workshops (ICSTW)","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124704715","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Software Bug Prediction Model Based on Mathematical Graph Features Metrics
Tomohiro Takeda, Satoshi Masuda, K. Tsuda
Pub Date: 2022-04-01 | DOI: 10.1109/ICSTW55395.2022.00047
Quality assurance is one of the most important activities in software development and maintenance. Software source code is modified through change requests, functional improvements, and refactoring. When software changes, it is difficult to define the scope of test cases, and software testing costs tend to increase to maintain software quality. Therefore, change analysis is a challenge, and static testing is a key solution to it. In this study, we propose new static testing metrics that apply mathematical graph analysis techniques to the control flow graph generated from the three-address code of the implementation, based on our hypothesis that graph features correlate with software bugs. Five graph features are strongly correlated with software bugs. Hence, our bug prediction model exhibits better performance (0.25 FN ratio, 0.04 TN ratio, and 0.08 precision) than a model based on the traditional bug prediction metrics, namely complexity, lines of code (steps), and CRUD.
{"title":"Software Bug Prediction Model Based on Mathematical Graph Features Metrics","authors":"Tomohiro Takeda, Satoshi Masuda, K. Tsuda","doi":"10.1109/ICSTW55395.2022.00047","DOIUrl":"https://doi.org/10.1109/ICSTW55395.2022.00047","url":null,"abstract":"Quality assurance is one of the most important activities in software development and maintenance. Software source codes are modified via change requests, functional improvement, and refactoring. When software changes, it is difficult to define the scope of test cases, and software testing costs tend to increase to maintain software quality. Therefore, change analysis is a challenge, and static testing is a key solution to this challenge. In this study, we propose new static testing metrics using mathematical graph analysis techniques for the control flow graph generated from the three-address code of the implementation codes based on our hypothesis of the existing correlation between the graph features and any software bugs. Five graph features are strongly correlated with the software bugs. Hence, our bug prediction model exhibits a better performance of 0.25 FN, 0.04 TN ratio, and 0.08 precision than a model based on the traditional bug prediction metrics, which are complexity, line of code (steps), and CRUD.","PeriodicalId":147133,"journal":{"name":"2022 IEEE International Conference on Software Testing, Verification and Validation Workshops (ICSTW)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115244055","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Experience of Combinatorial Testing toward Fault Detection, Isolation and Recovery Functionality
Naoko Okubo, Shoma Takatsuki, Yasushi Ueda
Pub Date: 2022-04-01 | DOI: 10.1109/ICSTW55395.2022.00025
The functionality of Fault Detection, Isolation, and Recovery (FDIR) is a key factor in achieving the high reliability of space systems. The test suites for the FDIR functionality in JAXA’s space systems are manually designed by expert engineers with decades of experience to achieve combination coverage as high as possible with a test suite as small as possible. However, only a few engineers can perform such ad-hoc test suite design. Therefore, FDIR functionality testing requires a supportive method to generate a test suite with high combination coverage and the smallest size that can be executed within the development timescale. In this paper, we describe our experience applying popular combinatorial testing techniques to generate FDIR functionality test suites for a real-world earth-observation satellite and comparing them with a conventional human-derived test suite. The purpose of this comparison is to assess the capability of existing combinatorial testing methods for FDIR functionality testing. Here, FDIR functionality testing was treated as combinatorial configuration testing. As a result, we found that the 2-way coverage rates achieved by the human-derived suite, PICT, ACTS, and the HAYST method were 72.7%, 66.3%, 68.8%, and 72.2% with 16, 10, 10, and 14 test cases, respectively.
{"title":"Experience of Combinatorial Testing toward Fault Detection, Isolation and Recovery Functionality","authors":"Naoko Okubo, Shoma Takatsuki, Yasushi Ueda","doi":"10.1109/ICSTW55395.2022.00025","DOIUrl":"https://doi.org/10.1109/ICSTW55395.2022.00025","url":null,"abstract":"The functionality of Fault Detection, Isolation, and Recovery (FDIR) is a key factor in achieving the high reliability of space systems. The test suites for the FDIR functionality in JAXA’s space systems are manually designed by expert engineers with decades of experience to achieve as high combination coverage with a small test suite as possible. However, there are only a few engineers who can perform such ad-hoc test suite design. Therefore, FDIR functionality testing requires a supportive method to generate a test suite with the high combination coverage with the smallest size that can be executed in the development timescale. In this paper, we describe our experience in applying popular combinatorial testing techniques to generate the real-world earth-observation satellite’s FDIR functionality test suites and comparing them with conventional human-derived test suite. The purpose of this comparison is to check the capability of the existing combinatorial testing methods toward FDIR functionality testing. Here, the FDIR functionality testing were treated as combinatorial configuration testing. As a result, we found that the 2-way coverage rate by the human, PICT, ACTS and the HAYST method were 72.7%, 66.3%, 68.8% and 72.2% with 16, 10, 10 and 14 test cases, respectively.","PeriodicalId":147133,"journal":{"name":"2022 IEEE International Conference on Software Testing, Verification and Validation Workshops (ICSTW)","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126481602","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Combination Frequency Differencing for Identifying Design Weaknesses in Physical Unclonable Functions
D. Kuhn, M. Raunak, Charles B. Prado, Vinay C. Patil, R. Kacker
Pub Date: 2022-04-01 | DOI: 10.1109/ICSTW55395.2022.00032
Combinatorial coverage measures have been defined and applied to a wide range of problems. These methods have been developed using measures that depend on the inclusion or absence of t-tuples of values in inputs and test cases. We extend these coverage measures to include the frequency of occurrence of combinations, in an approach that we refer to as combination frequency differencing (CFD). This method is particularly suited to artificial intelligence and machine learning (AI/ML) applications, where training data sets used in learning systems are dependent on the prevalence of various attributes of elements of class and non-class sets. We illustrate the use of this method by applying it to analyzing the susceptibility of physical unclonable functions (PUFs) to machine learning attacks. Preliminary results suggest that the method may be useful for identifying bit combinations that have a disproportionately strong influence on PUF response bit values.
{"title":"Combination Frequency Differencing for Identifying Design Weaknesses in Physical Unclonable Functions","authors":"D. Kuhn, M. Raunak, Charles B. Prado, Vinay C. Patil, R. Kacker","doi":"10.1109/ICSTW55395.2022.00032","DOIUrl":"https://doi.org/10.1109/ICSTW55395.2022.00032","url":null,"abstract":"Combinatorial coverage measures have been defined and applied to a wide range of problems. These methods have been developed using measures that depend on the inclusion or absence of t-tuples of values in inputs and test cases. We extend these coverage measures to include the frequency of occurrence of combinations, in an approach that we refer to as combination frequency differencing (CFD). This method is particularly suited to artificial intelligence and machine learning (AI/ML) applications, where training data sets used in learning systems are dependent on the prevalence of various attributes of elements of class and non-class sets. We illustrate the use of this method by applying it to analyzing the susceptibility of physical unclonable functions (PUFs) to machine learning attacks. Preliminary results suggest that the method may be useful for identifying bit combinations that have a disproportionately strong influence on PUF response bit values.","PeriodicalId":147133,"journal":{"name":"2022 IEEE International Conference on Software Testing, Verification and Validation Workshops (ICSTW)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124453473","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
RQCODE – Towards Object-Oriented Requirements in the Software Security Domain
Ildar Nigmatullin, A. Sadovykh, Nan Messe, S. Ebersold, J. Bruel
Pub Date: 2022-04-01 | DOI: 10.1109/ICSTW55395.2022.00015
Over the last 20 years, the number of vulnerabilities has increased nearly 20-fold, according to NIST statistics. Vulnerabilities expose companies to risks that may seriously threaten their operations. Therefore, it has long been suggested to apply security engineering, the process of accumulating multiple techniques and practices to ensure a sufficient level of security and to prevent vulnerabilities in the early stages of software development, including establishing security requirements and proper security testing. The informal nature of security requirements makes it difficult to maintain system security, eliminate redundancy, and trace requirements down to verification artifacts such as test cases. To deal with this problem, Seamless Object-Oriented Requirements (SOORs) promote incorporating formal requirements representations and verification means together into requirements classes. This article is a position paper that discusses opportunities to implement the Requirements as Code (RQCODE) concept, SOORs in Java, applied to the software security domain. We argue that this concept is elegant and has the potential to attract developers' attention, since it combines a lightweight formalization of requirements through security tests with seamless integration into off-the-shelf development environments, including modern Continuous Integration/Delivery platforms. The benefits of this approach are yet to be demonstrated in further studies in the VeriDevOps project.
{"title":"RQCODE – Towards Object-Oriented Requirements in the Software Security Domain","authors":"Ildar Nigmatullin, A. Sadovykh, Nan Messe, S. Ebersold, J. Bruel","doi":"10.1109/ICSTW55395.2022.00015","DOIUrl":"https://doi.org/10.1109/ICSTW55395.2022.00015","url":null,"abstract":"For the last 20 years, the number of vulnerabilities has increased near 20 times, according to NIST statistics. Vulnerabilities expose companies to risks that may seriously threaten their operations. Therefore, for a long time, it has been suggested to apply security engineering – the process of accumulating multiple techniques and practices to ensure a sufficient level of security and to prevent vulnerabilities in the early stages of software development, including establishing security requirements and proper security testing. The informal nature of security requirements makes it uneasy to maintain system security, eliminate redundancy and trace requirements down to verification artifacts such as test cases. To deal with this problem, Seamless Object-Oriented Requirements (SOORs) promote incorporating formal requirements representations and verification means together into requirements classes.This article is a position paper that discusses opportunities to implement the Requirements as Code (RQCODE) concepts, SOORs in Java, applied to the Software Security domain. We argue that this concept has an elegance and the potential to raise the attention of developers since it combines a lightweight formalization of requirements through security tests with seamless integration with off-the-shelf development environments, including modern Continuous Integration/Delivery platforms. The benefits of this approach are yet to be demonstrated in further studies in the VeriDevOps project.","PeriodicalId":147133,"journal":{"name":"2022 IEEE International Conference on Software Testing, Verification and Validation Workshops (ICSTW)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126520747","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Identifying Randomness related Flaky Tests through Divergence and Execution Tracing
Azeem Ahmad, Erik Norrestam Held, O. Leifler, K. Sandahl
Pub Date: 2022-04-01 | DOI: 10.1109/ICSTW55395.2022.00057
Developers often spend time determining whether test case failures are real failures or flaky. Flaky tests, also known as non-deterministic tests, change their outcomes without any changes in the codebase, reducing developers' trust during a software release as well as in the quality of a product. While rerunning test cases is a common approach, it is resource intensive, unreliable, and does not uncover the actual cause of test flakiness. Our paper evaluates an approach to identify randomness-related flaky tests. We used a divergence algorithm and execution tracing techniques to identify flaky tests, which resulted in the FlakyPy prototype. In addition, we discuss the cases where FlakyPy successfully identified flaky tests as well as those where it failed, and how FlakyPy's reporting mechanism can help developers identify the root cause of randomness-related test flakiness. Thirty-two open-source projects were used in this study. We conclude that FlakyPy can detect most randomness-related test flakiness, and that its reporting mechanism reveals sufficient information about possible root causes.
{"title":"Identifying Randomness related Flaky Tests through Divergence and Execution Tracing","authors":"Azeem Ahmad, Erik Norrestam Held, O. Leifler, K. Sandahl","doi":"10.1109/ICSTW55395.2022.00057","DOIUrl":"https://doi.org/10.1109/ICSTW55395.2022.00057","url":null,"abstract":"Developers often spend time to determine whether test case failures are real failures or flaky. The flaky tests, known as non-deterministic tests, change their outcomes without any changes in the codebase, thus reducing the trust of developers during a software release as well as in the quality of a product. While rerunning test cases is a common approach, it is resource intensive, unreliable, and does not uncover the actual cause of test flakiness. Our paper evaluates an approach to identify randomness-related flaky. This paper used a divergence algorithm and execution tracing techniques to identify flaky tests, which resulted in the FlakyPy prototype. In addition, this paper discusses the cases where FlakyPy successfully identified the flaky test as well as those cases where FlakyPy failed. The papers discuss how the reporting mechanism of FlakyPy can help developers in identifying the root cause of randomness-related test flakiness. Thirty-two open-source projects were used in this. We concluded that FlakyPy can detect most of the randomness-related test flakiness. In addition, the reporting mechanism of FlakyPy reveals sufficient information about possible root causes of test flakiness.","PeriodicalId":147133,"journal":{"name":"2022 IEEE International Conference on Software Testing, Verification and Validation Workshops (ICSTW)","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131890704","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Re-visiting the coupling between mutants and real faults with Defects4J 2.0
Thomas Laurent, Stephen Gaffney, Anthony Ventresque
Pub Date: 2022-04-01 | DOI: 10.1109/ICSTW55395.2022.00042
Mutation analysis is a well-known testing criterion that involves seeding changes into the system under test, i.e. creating mutants, to simulate faults, and measuring the capacity of the test suite to detect these changes. The question of whether real faults are coupled with the mutants is central, as it determines whether tests that detect the mutants will also detect faults that actually occur in code, making the mutants reasonable test requirements. Prior work has explored this question, notably using the Defects4J dataset in Java. As the dataset and the mutation tools used in these prior works have evolved, this work re-visits the question using the newest available versions in order to strengthen and extend prior results. We use 337 real faults from 15 different projects in the Defects4J 2.0.0 dataset, 2,828 test suites, and two well-known Java mutation testing tools (Major and Pitest) to explore (i) to what extent real faults are coupled with mutants, (ii) how the two tools compare in terms of producing mutants coupled with faults, (iii) the characteristics of the mutants that are coupled with real faults, and (iv) the characteristics of faults not coupled with any mutants. Most (80.7%) of the faults were coupled with at least one mutant created by Pitest or Major, most often with mutants created by both tools. All operators used produced a low (<4%) proportion of coupled mutants, although some operators are exclusively coupled to more faults, i.e. coupled to faults for which no other operator produces coupled mutants. Finally, faults not coupled with any mutants usually had small fix patches, and although the code related to these faults was mostly affected by the mutation operators used, the mutants produced were still not coupled. The results confirm previous findings showing that the coupling effect mostly holds but that additional mutation operators are needed to capture all faults.
{"title":"Re-visiting the coupling between mutants and real faults with Defects4J 2.0","authors":"Thomas Laurent, Stephen Gaffney, Anthony Ventresque","doi":"10.1109/ICSTW55395.2022.00042","DOIUrl":"https://doi.org/10.1109/ICSTW55395.2022.00042","url":null,"abstract":"Mutation analysis is a well-known testing criterion that involves seeding changes in the system under test, i.e. creating mutants, to simulate faults, and measuring the capacity of the test suite to detect these changes. The question of whether real faults are coupled with the mutants is central, as it determines whether tests that detect the mutants will also detect faults that actually occur in code, making the mutants reasonable test requirements. Prior work has explored this question, notably using the Defects4J dataset in Java. As the dataset and the mutation tools used in these prior works have evolved, this work re-visits this question using the newest available versions in order to strengthen and extend prior results. In this work we use 337 real faults from 15 different projects in the Defects4J 2.0.0 dataset, 2,828 test suites, and two well-known Java mutation testing tools (Major and Pitest) to explore (i) to what extent real faults are coupled with mutants, (ii) how both tools compare in terms of producing mutants coupled with faults, (iii) the characteristics of the mutants that are coupled with real faults, and (iv) the characteristics of faults not coupled with the mutants. Most (80.7%) of the faults used were coupled with at least one mutant created by Pitest or Major, most often with mutants created by both tools. All operators used produced a low (<4%) proportion of coupled mutants, although some operators are exclusively coupled to more faults, i.e. coupled to faults where no other operator produces coupled mutants. Finally, faults not coupled with any mutants usually had small fix patches, and although the code related to these faults was mostly affected by the mutation operators used the mutants produces were still not coupled. Results confirm previous findings showing that the coupling effect mostly holds but that additional mutation operators are needed to capture all faults.","PeriodicalId":147133,"journal":{"name":"2022 IEEE International Conference on Software Testing, Verification and Validation Workshops (ICSTW)","volume":"71 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133305643","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Security Testing as part of Software Quality Assurance: Principles and Challenges
Wissam Mallouli
Pub Date: 2022-04-01 | DOI: 10.1109/ICSTW55395.2022.00019
Software quality assurance (SQA) is a means and practice of monitoring the software engineering processes and methods used in a project to ensure proper software quality. It encompasses the entire software development life-cycle, including requirements engineering, software design, coding, source code reviews, software configuration management, testing, release management, software deployment, and software integration. It is organized into goals, commitments, abilities, activities, measurements, and verification and validation. In this talk, we mainly focus on the testing activity of the software development life-cycle. Its main objective is checking that software satisfies a set of quality properties identified by the "ISO/IEC 25010:2011 System and Software Quality Model" standard [1].
{"title":"Security Testing as part of Software Quality Assurance: Principles and Challenges","authors":"Wissam Mallouli","doi":"10.1109/ICSTW55395.2022.00019","DOIUrl":"https://doi.org/10.1109/ICSTW55395.2022.00019","url":null,"abstract":"Software quality assurance (SQA) is a means and practice of monitoring the software engineering processes and methods used in a project to ensure proper quality of the software. It encompasses the entire software development life-cycle, including requirements engineering, software design, coding, source code reviews, software configuration management, testing , release management, software deployment and software integration. It is organized into goals, commitments, abilities, activities, measurements, verification and validation. In this talk, we will mainly focus on the testing activity part of the software development life-cycle. Its main objective is checking that software is satisfying a set of quality properties that are identified by the \"ISO/IEC 25010:2011 System and Software Quality Model\" standard [1] .","PeriodicalId":147133,"journal":{"name":"2022 IEEE International Conference on Software Testing, Verification and Validation Workshops (ICSTW)","volume":"71 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114357653","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}