PTSSBench: a performance evaluation platform in support of automated parameter tuning of software systems
Rong Cao, Liang Bao, Panpan Zhangsun, Chase Wu, Shouxin Wei, Ren Sun, Ran Li, Zhe Zhang
Automated Software Engineering, vol. 31, no. 1. Published 2023-11-21. DOI: 10.1007/s10515-023-00402-z (https://link.springer.com/article/10.1007/s10515-023-00402-z)
Citations: 0
Abstract
As software systems become increasingly large and complex, automated parameter tuning of software systems (PTSS) has become a focus of research, and many tuning algorithms have been proposed in recent years. However, due to the lack of a unified platform for comparing and reproducing existing tuning algorithms, choosing an appropriate algorithm for a given software system remains a significant challenge. This challenge has multiple causes, including diverse experimental conditions, the lack of evaluations across different tasks, and the excessive cost of evaluating tuning algorithms. In this paper, we propose an extensible and efficient benchmark, referred to as PTSSBench, which provides a unified platform for the comparative study of different tuning algorithms via both surrogate models and actual systems. We demonstrate the usability and efficiency of PTSSBench through comparative experiments on six state-of-the-art tuning algorithms, conducted from both a holistic perspective and a task-oriented perspective. The experimental results show the necessity and effectiveness of parameter tuning for software systems and indicate that PTSS remains an open problem. Moreover, PTSSBench allows extensive runs and in-depth analyses of parameter tuning algorithms, hence providing an efficient and effective way for researchers to develop new tuning algorithms and for users to choose appropriate tuning algorithms for their systems. The proposed PTSSBench benchmark, together with the experimental results, is made publicly available online as an open-source project.
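To make the core idea concrete, the sketch below illustrates how a tuning algorithm can be evaluated against a cheap surrogate model of the configuration-to-performance mapping instead of deploying and measuring the actual system, which is the general evaluation pattern the abstract describes. This is a minimal, hypothetical example only: the class and function names (SurrogateBenchmark, random_search), the toy parameter space, and the quadratic surrogate are assumptions for illustration and are not PTSSBench's actual API.

```python
# Hypothetical sketch of surrogate-backed evaluation of a tuning algorithm.
# The tuner queries a learned/toy surrogate instead of the real system,
# which makes extensive comparative runs affordable.
import random
from typing import Callable, Dict, List, Tuple

class SurrogateBenchmark:
    """Evaluates configurations with a surrogate model fitted offline (illustrative)."""

    def __init__(self, param_space: Dict[str, Tuple[float, float]],
                 surrogate: Callable[[Dict[str, float]], float]):
        self.param_space = param_space      # parameter name -> (low, high)
        self.surrogate = surrogate          # predicts performance (e.g., throughput)
        self.history: List[Tuple[Dict[str, float], float]] = []

    def evaluate(self, config: Dict[str, float]) -> float:
        score = self.surrogate(config)      # cheap prediction, no real deployment
        self.history.append((config, score))
        return score

def random_search(bench: SurrogateBenchmark, budget: int) -> Tuple[Dict[str, float], float]:
    """Baseline tuner: sample uniformly at random and keep the best configuration."""
    best_cfg, best_score = None, float("-inf")
    for _ in range(budget):
        cfg = {name: random.uniform(lo, hi)
               for name, (lo, hi) in bench.param_space.items()}
        score = bench.evaluate(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

if __name__ == "__main__":
    # Toy surrogate: a quadratic "performance" surface over two database-like knobs.
    space = {"buffer_pool_mb": (128.0, 8192.0), "max_connections": (10.0, 1000.0)}
    toy_surrogate = lambda c: -((c["buffer_pool_mb"] - 4096) ** 2) / 1e6 \
                              - ((c["max_connections"] - 500) ** 2) / 1e4
    bench = SurrogateBenchmark(space, toy_surrogate)
    best, score = random_search(bench, budget=200)
    print("best configuration:", best, "predicted score:", round(score, 3))
```

In a benchmark of this kind, more sophisticated tuners (e.g., Bayesian optimization or evolutionary search) would be plugged in where random_search appears, and the same interface could instead dispatch each evaluation to the actual system when higher-fidelity measurements are required.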
Journal Introduction
This journal publishes research papers, tutorial papers, surveys, and accounts of significant industrial experience in the foundations, techniques, tools, and applications of automated software engineering technology. This includes the study of techniques for constructing, understanding, adapting, and modeling software artifacts and processes.
Coverage in Automated Software Engineering examines both automatic systems and collaborative systems as well as computational models of human software engineering activities. In addition, it presents knowledge representations and artificial intelligence techniques applicable to automated software engineering, and formal techniques that support or provide theoretical foundations. The journal also includes reviews of books, software, conferences and workshops.