Deema Alshoaibi, Ikram Chaabane, Kevin Hannigan, Ali Ouni, Mohamed Wiem Mkaouer
{"title":"性能回归检测引入代码变更:来自Git项目的经验","authors":"Deema Alshoaibi, Ikram Chaabane, Kevin Hannigan, Ali Ouni, Mohamed Wiem Mkaouer","doi":"10.1109/STC55697.2022.00036","DOIUrl":null,"url":null,"abstract":"For many software applications, performance is a critical Non-Functional requirement. Different software testing techniques are associated with various types of software testing, often related to performance regressions. Detecting code changes responsible for performance regression, for a rapidly evolving software with an increasing number of daily commits, is becoming arduous due to performance tests being time-consuming. The expense of running performance benchmarks, for all committed changes, has evolved to the bottleneck of detecting performance regression. Therefore, a recent technique called Perphecy was proposed to help, with quickly identifying performance regression introducing code changes, supporting the selection of performance tests, and reducing their execution time. However, Perphecy was not thoroughly tested on a large system, and so, its performance is still unknown in a real-world scenario. In this paper, we perform an in-depth analysis of Perphecy’s ability to identify performance regression introducing code changes on the open-source Git project. Our work challenges the ability of the model to sustain its performance when increasing the sample under test from 201 commits, to 8596 commits. In addition to verifying the scalability of the previous findings, we also test the efficiency of the proposed approach against a wider variety of performance regression introducing code changes. 
We provide insights into its advantages, limitations, and practical value.","PeriodicalId":170123,"journal":{"name":"2022 IEEE 29th Annual Software Technology Conference (STC)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"On the Detection of Performance Regression Introducing Code Changes: Experience from the Git Project\",\"authors\":\"Deema Alshoaibi, Ikram Chaabane, Kevin Hannigan, Ali Ouni, Mohamed Wiem Mkaouer\",\"doi\":\"10.1109/STC55697.2022.00036\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"For many software applications, performance is a critical Non-Functional requirement. Different software testing techniques are associated with various types of software testing, often related to performance regressions. Detecting code changes responsible for performance regression, for a rapidly evolving software with an increasing number of daily commits, is becoming arduous due to performance tests being time-consuming. The expense of running performance benchmarks, for all committed changes, has evolved to the bottleneck of detecting performance regression. Therefore, a recent technique called Perphecy was proposed to help, with quickly identifying performance regression introducing code changes, supporting the selection of performance tests, and reducing their execution time. However, Perphecy was not thoroughly tested on a large system, and so, its performance is still unknown in a real-world scenario. In this paper, we perform an in-depth analysis of Perphecy’s ability to identify performance regression introducing code changes on the open-source Git project. Our work challenges the ability of the model to sustain its performance when increasing the sample under test from 201 commits, to 8596 commits. 
In addition to verifying the scalability of the previous findings, we also test the efficiency of the proposed approach against a wider variety of performance regression introducing code changes. We provide insights into its advantages, limitations, and practical value.\",\"PeriodicalId\":170123,\"journal\":{\"name\":\"2022 IEEE 29th Annual Software Technology Conference (STC)\",\"volume\":\"13 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-10-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 IEEE 29th Annual Software Technology Conference (STC)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/STC55697.2022.00036\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE 29th Annual Software Technology Conference (STC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/STC55697.2022.00036","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
On the Detection of Performance Regression Introducing Code Changes: Experience from the Git Project
For many software applications, performance is a critical non-functional requirement, and several software testing techniques target performance regressions specifically. Detecting the code changes responsible for a performance regression is becoming arduous for rapidly evolving software with a growing number of daily commits, because performance tests are time-consuming. The cost of running performance benchmarks on every committed change has become the bottleneck of performance regression detection. Perphecy, a recently proposed technique, addresses this by quickly identifying performance regression-introducing code changes, supporting the selection of performance tests and reducing their execution time. However, Perphecy has not been thoroughly tested on a large system, so its performance in a real-world scenario remains unknown. In this paper, we perform an in-depth analysis of Perphecy's ability to identify performance regression-introducing code changes in the open-source Git project. Our work challenges the model's ability to sustain its performance when the sample under test grows from 201 commits to 8,596 commits. In addition to verifying that the previous findings scale, we test the efficiency of the approach against a wider variety of performance regression-introducing code changes. We provide insights into its advantages, limitations, and practical value.
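The core idea described above — using a cheap predictor to decide which commits deserve an expensive benchmark run — can be sketched as follows. This is a minimal illustration, not Perphecy's actual implementation: the indicator names and thresholds here are hypothetical stand-ins for the lightweight per-commit signals such a technique might compute, combined by disjunction so that a commit is flagged if any indicator fires.

```python
# Hedged sketch of indicator-based commit filtering for performance
# regression detection. Indicator names and thresholds are hypothetical;
# the point is the shape of the technique: cheap checks gate the
# expensive benchmark runs.

def flag_commit(indicators, thresholds):
    """Return True if any indicator meets or exceeds its threshold
    (disjunctive combination: one firing indicator flags the commit)."""
    return any(indicators[name] >= thresholds[name] for name in thresholds)

# Hypothetical thresholds, e.g. tuned on historical regressions.
thresholds = {"loops_touched": 1, "hot_functions_changed": 1, "hunks": 10}

# Per-commit indicator values, e.g. extracted from the diff.
commits = [
    {"id": "a1b2c3", "loops_touched": 0, "hot_functions_changed": 0, "hunks": 2},
    {"id": "d4e5f6", "loops_touched": 2, "hot_functions_changed": 1, "hunks": 4},
]

# Only flagged commits proceed to the time-consuming benchmark suite.
to_benchmark = [c["id"] for c in commits if flag_commit(c, thresholds)]
print(to_benchmark)  # → ['d4e5f6']
```

The design trade-off the paper evaluates at scale follows directly from this shape: a predictor that flags too few commits misses regressions, while one that flags too many erodes the benchmark-time savings that motivate the approach.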