Performance mutation testing
Pedro Delgado-Pérez, A. B. Sánchez, Sergio Segura, I. Medina-Bulo
Software Testing Verification & Reliability, 29 January 2020. DOI: 10.1002/stvr.1728
Abstract:
Performance bugs are known to be a major threat to the success of software products. Performance tests aim to detect performance bugs by executing the program through test cases and checking whether it exhibits a noticeable performance degradation. The principles of mutation testing, a well‐established testing technique for the assessment of test suites through the injection of artificial faults, could be exploited to evaluate and improve the detection power of performance tests. However, the application of mutation testing to assess performance tests, henceforth called performance mutation testing (PMT), is a novel research topic with numerous open challenges. In previous papers, we identified some key challenges related to PMT. In this work, we go a step further and explore the feasibility of applying PMT at the source‐code level in general‐purpose languages. To do so, we revisit concepts associated with classical mutation testing and design seven novel mutation operators to model known bug‐inducing patterns. As a proof of concept, we applied traditional mutation operators as well as performance mutation operators to open‐source C++ programs. The results reveal the potential of the new performance‐mutants to help assess and enhance performance tests when compared with traditional mutants. A review of live mutants in these programs suggests that they can induce the design of special test inputs. In addition to these promising results, our work brings a whole new set of challenges related to PMT, which will hopefully serve as a starting point for new contributions in the area.
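Since the abstract does not spell out the seven operators, the following C++ sketch is only a hypothetical illustration of the idea: a mutation operator that swaps a hash-based lookup for a linear scan (a known bug-inducing pattern of inefficient data-structure use), paired with a minimal timing-based performance test that kills the mutant when the observed slowdown exceeds an assumed factor of two. The names (contains_original, contains_mutant, time_it) and the threshold are assumptions introduced for this example, not taken from the paper.

// A minimal, hypothetical sketch of performance mutation testing.
// The operator shown (hash lookup -> linear scan) and the 2x timing
// threshold are illustrative assumptions, not operators from the paper.
#include <algorithm>
#include <chrono>
#include <cstddef>
#include <iostream>
#include <unordered_set>
#include <vector>

// Original code under test: membership queries via a hash set, O(1) each.
bool contains_original(const std::unordered_set<int>& index, int key) {
    return index.count(key) != 0;
}

// Hypothetical performance mutant: same functional behaviour, but the
// lookup is replaced by a linear scan, degrading each query to O(n).
bool contains_mutant(const std::vector<int>& index, int key) {
    return std::find(index.begin(), index.end(), key) != index.end();
}

int main() {
    const int n = 50000;
    std::unordered_set<int> hashed;
    std::vector<int> flat;
    for (int i = 0; i < n; ++i) { hashed.insert(i); flat.push_back(i); }

    std::size_t sink = 0;  // accumulate results so the loops are not optimised away
    auto time_it = [&](auto&& query) {
        auto start = std::chrono::steady_clock::now();
        for (int k = 0; k < n; ++k) sink += query(k) ? 1u : 0u;
        auto stop = std::chrono::steady_clock::now();
        return std::chrono::duration<double>(stop - start).count();
    };

    double baseline = time_it([&](int k) { return contains_original(hashed, k); });
    double mutated  = time_it([&](int k) { return contains_mutant(flat, k); });

    // The performance test "kills" the mutant if the observed slowdown
    // exceeds a chosen threshold (assumed here to be a factor of 2).
    bool killed = mutated > 2.0 * baseline;
    std::cout << (killed ? "mutant killed" : "mutant survived")
              << " (baseline " << baseline << " s, mutant " << mutated
              << " s, " << sink << " hits)\n";
    return 0;
}

In practice, timing-based oracles are noisy, so a real PMT setup would repeat measurements and control for machine load before deciding whether a mutant is killed.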
Journal description:
The journal is the premier outlet for research results on the subjects of testing, verification and reliability. Readers will find useful research on issues pertaining to building better software and evaluating it.
The journal is unique in its emphasis on both theoretical foundations and applications to real-world software development. Its balance of theory, empirical work, and practical applications provides readers with better techniques for testing, verifying and improving the reliability of software.
The journal targets researchers, practitioners, educators and students who have a vested interest in results generated by high-quality testing, verification and reliability modeling and evaluation of software. Topics of special interest include, but are not limited to:
- New criteria for software testing and verification
- Application of existing software testing and verification techniques to new types of software, including web applications, web services, embedded software, aspect-oriented software, and software architectures
- Model-based testing
- Formal verification techniques such as model checking
- Comparison of testing and verification techniques
- Measurement of and metrics for testing, verification and reliability
- Industrial experience with cutting-edge techniques
- Descriptions and evaluations of commercial and open-source software testing tools
- Reliability modeling, measurement and application
- Testing and verification of software security
- Automated test data generation
- Process issues and methods
- Non-functional testing