
Companion of the 2022 ACM/SPEC International Conference on Performance Engineering: Latest Publications

Experience and Guidelines for Sorting Algorithm Choices and Their Energy Efficiency
Maximilian Meissner, Supriya Kamthania, Nishant Rawtani, James Bucek, K. Lange, Samuel Kounev
Energy efficiency has become a major concern in the IT sector, as the energy demand of data centers is projected to reach 1 PWh per year by 2030. While hardware designers improve the energy efficiency of their products, software developers often do not consider, or are unaware of, the impact their design choices can have on the energy consumed by the execution of their applications. Energy-efficiency improvements in applications can, to a certain extent, be achieved through compiler optimizations. Nonetheless, software developers should still make reasonable design choices to improve energy efficiency further. In this paper, we present the energy efficiency of common sorting algorithms under different pre-sorted conditions. Previous work in this field considered only randomized data. We expand on this previous work and measure the sorting algorithms' energy efficiency when the data is already partially sorted to 20% and 50%. Our presented experience is a case study intended to demonstrate the effect that simple design choices, such as the selection of an algorithm and its implementation, can have on energy efficiency. It is intended to aid industry practitioners, through helpful guidelines, in selecting a more energy-efficient algorithm for their problems at hand. Our results can also function as an incentive to make energy efficiency a non-functional requirement for tenders, and as a motivation for researchers to include energy efficiency as an additional quality criterion when studying the properties of algorithms.
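The experimental setup the abstract describes can be approximated in a few lines. The sketch below (hypothetical helper names, not the paper's harness; wall-clock time is used as a crude proxy, since the actual energy measurements require SPEC-style power instrumentation) generates partially pre-sorted input and times a sort routine on it:

```python
import random
import time

def partially_sorted(n, sorted_fraction, seed=0):
    """Return a permutation of range(n) in which roughly `sorted_fraction`
    of the elements remain in their sorted positions (approximate: the
    perturbation uses random pair swaps)."""
    rng = random.Random(seed)
    data = list(range(n))            # fully sorted baseline
    swaps = int(n * (1 - sorted_fraction))
    for _ in range(swaps):           # perturb the unsorted share
        i, j = rng.randrange(n), rng.randrange(n)
        data[i], data[j] = data[j], data[i]
    return data

def time_sort(sort_fn, data, repeats=5):
    """Best-of-`repeats` wall-clock time of sorting a copy of `data`."""
    best = float("inf")
    for _ in range(repeats):
        copy = list(data)
        t0 = time.perf_counter()
        sort_fn(copy)
        best = min(best, time.perf_counter() - t0)
    return best
```

Under the common assumption that energy scales with active CPU time on a fixed machine, comparing `time_sort` results across algorithms and `sorted_fraction` values mirrors the paper's 20%/50% pre-sorted conditions.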
DOI: 10.1145/3491204.3527468 | Published: 2022-07-14
Citations: 0
B-MEG: Bottlenecked-Microservices Extraction Using Graph Neural Networks
Gagan Somashekar, Anurag Dutt, R. Vaddavalli, Sai Bhargav Varanasi, Anshul Gandhi
The microservices architecture enables independent development and maintenance of application components through its fine-grained and modular design. This has enabled rapid adoption of microservices architecture to build latency-sensitive online applications. In such online applications, it is critical to detect and mitigate sources of performance degradation (bottlenecks). However, the modular design of microservices architecture leads to a large graph of interacting microservices whose influence on each other is non-trivial. In this preliminary work, we explore the effectiveness of Graph Neural Network models in detecting bottlenecks. Preliminary analysis shows that our framework, B-MEG, produces promising results, especially for applications with complex call graphs. B-MEG shows up to 15% and 14% improvements in accuracy and precision, respectively, and close to 10× increase in recall for detecting bottlenecks compared to the technique used in existing work for bottleneck detection in microservices.
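B-MEG itself trains Graph Neural Network models; as a rough illustration of the kind of signal such a model consumes, the sketch below (hypothetical names, not the paper's method) derives a crude per-service "self latency" from a call graph and flags services above a threshold:

```python
def self_latencies(call_graph, total_latency):
    """call_graph[s]: downstream services that s calls;
    total_latency[s]: end-to-end latency observed at s.
    Self latency = total minus time spent waiting on children,
    assuming sequential calls -- a crude hand-crafted feature that
    a GNN would learn to refine from the full graph structure."""
    return {
        s: total_latency[s] - sum(total_latency[c] for c in call_graph.get(s, []))
        for s in total_latency
    }

def bottlenecks(call_graph, total_latency, threshold):
    """Flag services whose self latency exceeds `threshold`."""
    self_lat = self_latencies(call_graph, total_latency)
    return [s for s, v in self_lat.items() if v > threshold]
```

This heuristic breaks down when calls overlap or influence is indirect, which is exactly the non-trivial interaction the abstract motivates learned models for.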
DOI: 10.1145/3491204.3527494 | Published: 2022-07-14
Citations: 4
Benchmarking Runtime Scripting Performance in Wasmer
Devon Hockley, C. Williamson
In this paper, we explore the use of Wasmer and WebAssembly (WASM) as a sandboxed environment for general-purpose runtime scripting. Our work differs from prior research focusing on browser-based performance or SPEC benchmarks. In particular, we use micro-benchmarks and a macro-benchmark (both written in Rust) to compare execution times between WASM and native mode. We first measure which elements of script execution have the largest performance impact, using simple micro-benchmarks. Then we consider a Web proxy caching simulator, with different cache replacement policies, as a macro-benchmark. Using this simulator, we demonstrate a 5-10x performance penalty for WASM compared to native execution.
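A generic timing harness of the kind the abstract alludes to might look as follows (hypothetical names; the paper's benchmarks are written in Rust and run inside Wasmer, which this Python sketch does not reproduce):

```python
import time

def bench(fn, *args, warmup=2, runs=5):
    """Median-of-runs timing with warmup, the usual shape of a
    micro-benchmark loop."""
    for _ in range(warmup):
        fn(*args)
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        fn(*args)
        samples.append(time.perf_counter() - t0)
    samples.sort()
    return samples[len(samples) // 2]

def slowdown(native_s, sandboxed_s):
    """Penalty factor, reported in the paper as e.g. '5-10x vs. native'."""
    return sandboxed_s / native_s
```

Running the same workload once natively and once inside the sandboxed runtime, then dividing the medians, yields the penalty factor the abstract reports.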
DOI: 10.1145/3491204.3527477 | Published: 2022-07-14
Citations: 3
SPEC Efficiency Benchmark Development: How to Contribute to the Future of Energy Conservation
Maximilian Meissner, K. Lange, J. Arnold, Sanjay Sharma, Roger Tipley, Nishant Rawtani, D. Reiner, Mike Petrich, Aaron Cragin
A driving force behind the improvement of server efficiency in recent years is the use of SPEC benchmarks. They are used in mandatory government regulations, the ISO/IEC 21836:2020 standard, and product marketing, giving server manufacturers and buyers a significant incentive to improve energy efficiency. To produce relevant results, benchmarks need to take into account future trends in hardware and software development, such as the introduction of new accelerators and workloads. To keep pace with the fast-moving IT landscape, SPEC plans to introduce a workload bounty program to encourage researchers to develop novel workloads. Submitted workloads will be considered for inclusion in future SPEC Efficiency benchmarks and rewarded. In this paper, we outline the process of energy-efficiency benchmark development. SPEC ensures the development of high-quality benchmarks for government regulations through its extensive experience and collaboration with stakeholders from industry, academia, and governments. One of the tools that emerged from this process is the Chauffeur Worklet Development Kit (WDK), which researchers can use to develop next-generation workloads that enhance the real-world relevance of future SPEC benchmarks, a critical element for the benchmarks to contribute to future energy conservation.
DOI: 10.1145/3491204.3527492 | Published: 2022-07-14
Citations: 1
Measuring Baseline Overheads in Different Orchestration Mechanisms for Large FaaS Workflows
George Kousiouris, Chris Giannakos, K. Tserpes, Teta Stamati
Serverless environments have attracted significant attention in recent years as a result of their agility in execution as well as the inherent scaling capabilities of the cloud-native execution model. While extensive analysis has been performed on various critical performance aspects of these environments, such as cold-start times, workflow orchestration delays have been neglected. Given that this paradigm has matured in recent years, and application complexity has started to rise from a few functions to more complex application structures, delays in orchestrating these functions may become severe. In this work, OpenWhisk, one of the main open-source FaaS platforms, is used to measure and investigate the orchestration delays of the platform's main sequence operator. These are compared to the delays incurred when orchestrating functions through two alternative means, including the execution of orchestrator logic functions in supporting runtimes based on Node-RED. The delays introduced by each orchestration mode are measured and modeled, and boundary points for selecting between the modes are presented, based on the number and expected delay of the functions that constitute the workflow. Indicatively, in certain cases the orchestration overhead can range from 0.29% to 235% of the useful computational time needed by the workflow functions. The results can extend simulation and estimation mechanisms with information on the orchestration overheads.
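The headline overhead figure can be read as orchestration time relative to useful compute time. A minimal sketch of that calculation (hypothetical function name, assuming the workflow executes its functions sequentially):

```python
def orchestration_overhead(total_workflow_s, function_compute_s):
    """Orchestration overhead as a percentage of useful compute time.

    total_workflow_s: observed end-to-end workflow duration (seconds)
    function_compute_s: per-function compute durations (seconds)
    The paper reports values from 0.29% up to 235% for this ratio."""
    compute = sum(function_compute_s)
    return 100.0 * (total_workflow_s - compute) / compute
```

For example, a workflow that takes 3.3 s end to end while its functions compute for 1.0 s and 2.0 s carries a 10% orchestration overhead.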
DOI: 10.1145/3491204.3527467 | Published: 2022-07-14
Citations: 1
TaskFlow
L. Versluis, A. Iosup
Datacenters need to become more power efficient for political and climate reasons. In this work, we introduce an idea for the community to further explore. We embed the idea in TaskFlow: a makespan-conservative, energy-aware task placement policy for workflow scheduling. Using static, rough numbers and simulation, we obtain energy savings between [4.24, 47.00]% and [0.1, 13.6]%, respectively. We also present some pitfalls that should be investigated further, notably starvation of large tasks when using TaskFlow.
DOI: 10.1145/3491204.3527466 | Published: 2022-07-14
Citations: 1
FaaSET
R. Cordingly, W. Lloyd
Function-as-a-Service platforms require developers to use many different tools and services for function development, packaging, deployment, debugging, testing, orchestration of experiments, and analysis of results. Diverse toolchains are necessary due to the differences in how each platform is designed, the technologies they support, and the APIs they provide, leading to usability challenges for developers. To combine support for all of the tasks and tools into a unified workspace, we created the FaaS Experiment Toolkit (FaaSET). At the core of FaaSET is a Jupyter notebook development environment that enables developers to write functions, deploy them across multiple platforms, invoke and test them, automate experiments, and perform data analysis all in a single environment.
DOI: 10.1145/3491204.3527464 | Published: 2022-07-14
Citations: 2
Automated Triage of Performance Change Points Using Time Series Analysis and Machine Learning: Data Challenge Paper
A. Bauer, Martin Straesser, Lukas Beierlieb, Maximilian Meissner, Samuel Kounev
Performance regression testing is a foundation of modern DevOps processes and pipelines. Thus, the detection of change points, i.e., updates or commits that cause a significant change in the performance of the software, is of special importance. Typically, validating potential change points relies on humans, which is a considerable bottleneck and costs time and effort. This work proposes a solution to classify and detect change points automatically. On the performance test data set provided by MongoDB, our approach classifies potential change points with an AUC of 95.8% and an accuracy of 94.3%, whereas the detection and classification of change points based on the previous and current commits exhibits an AUC of 92.0% and an accuracy of 84.3%. In both cases, our approach can save time-consuming and costly human work.
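The paper's triage uses trained ML classifiers; as a baseline illustration of the underlying problem, the sketch below (hypothetical names, not the authors' approach) flags change points by comparing the means of adjacent windows of a performance time series:

```python
def change_points(series, window=5, threshold=0.2):
    """Flag indices where the mean of the next `window` samples deviates
    from the mean of the previous `window` by more than `threshold`
    (relative change). A crude rolling-window stand-in for the paper's
    learned classifier."""
    flagged = []
    for i in range(window, len(series) - window + 1):
        before = sum(series[i - window:i]) / window
        after = sum(series[i:i + window]) / window
        if before and abs(after - before) / abs(before) > threshold:
            flagged.append(i)
    return flagged
```

Such a heuristic produces many candidates around each true shift; the value of the learned approach in the paper is ranking and classifying these candidates so humans need not validate them manually.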
DOI: 10.1145/3491204.3527486 | Published: 2022-07-14
Citations: 3
Design-time Performability Optimization of Runtime Adaptation Strategies
Martina Rapp, Max Scheerer, Ralf H. Reussner
Self-Adaptive Systems (SASs) adapt themselves to environmental changes at runtime to maintain Quality of Service (QoS) goals. Designing and optimizing the adaptation strategy of an SAS with regard to its impact on quality properties is a challenging problem. Usually, the design space of adaptation strategies is too large to be explored manually and hence requires automated support to find optimal strategies. Most approaches address this problem with optimization at runtime, which requires that the system already be implemented. However, one expects design-time-optimized adaptation strategies to maintain QoS goals more effectively than purely runtime-optimized strategies. Formal guarantees also benefit from explicitly designed and analysed strategies. We claim that design-time analysis and optimization of adaptation strategies improve quality properties such as performability in particular. To address the research gap between runtime optimization and the ability to make statements about the achieved quality, we envision an approach that builds upon the concept of Model-Based Quality Analysis (MBQA). Many approaches in MBQA address single aspects, such as formal languages for adaptation strategies, architectural description languages, or QoS prediction. However, they lack integration, which leads, for example, to prediction approaches that assume rather static systems. In this paper, we envision a unified approach by considering several sub-approaches as building blocks for the performability-based optimization of adaptation strategies at design time.
DOI: 10.1145/3491204.3527471 | Published: 2022-07-14
Citations: 2
How is Transient Behavior Addressed in Practice?: Insights from a Series of Expert Interviews
S. Beck, Sebastian Frank, Alireza Hakamian, André van Hoorn
Transient behavior occurs when a running software system changes from one steady state to another. In microservice systems, such disruptions can, for example, be caused by continuous deployment, self-adaptation, and various failures. Although transient behavior could be captured in non-functional requirements, little is known about how this is handled in practice. Our objective was to study how architects and engineers approach runtime disruptions, which challenges they face, whether or not they specify transient behavior, and how currently employed tools and methods can be improved. To this end, we conducted semi-structured interviews with five experienced practitioners from major companies in Germany. We found that a big challenge in the industry is a lack of awareness of transient behavior among software stakeholders. Consequently, they often do not consider specifying it in non-functional requirements. Additionally, better tooling is needed to reduce the effort of analyzing transient behavior. We present two prototypes that we developed in response to these findings to improve the current situation. Beyond that, the insights we present can serve as pointers to interesting research directions for other researchers.
DOI: 10.1145/3491204.3527483. Companion of the 2022 ACM/SPEC International Conference on Performance Engineering, published 2022-07-14.
Citations: 1
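The abstract above notes that transient behavior could be captured in non-functional requirements but rarely is. As a minimal sketch (not from the paper; the function name, metric, and thresholds are hypothetical), such a requirement — e.g. "after a deployment, latency must return to within 10% of its steady-state value" — might be checked against a series of metric samples like this:

```python
# Illustrative sketch of checking a transient-behavior requirement.
# All names and numbers here are hypothetical examples, not the
# authors' prototypes.

def settling_time(samples, steady_value, tolerance):
    """Return the index from which every remaining sample stays within
    `tolerance` (relative) of `steady_value`, or None if it never settles."""
    settled_from = None
    for i, value in enumerate(samples):
        if abs(value - steady_value) <= tolerance * steady_value:
            if settled_from is None:
                settled_from = i
        else:
            settled_from = None  # left the band again, so not yet settled
    return settled_from

# Latency samples (ms) around a deployment: a spike, then recovery.
latency = [50, 52, 300, 240, 180, 90, 55, 51, 50, 52]

# Hypothetical requirement: back within 10% of the 50 ms steady state.
idx = settling_time(latency, steady_value=50, tolerance=0.10)
print(idx)  # → 6: the deployment spike at sample 2 settles at sample 6
```

Combined with a deadline on `idx` (e.g. "settled within 60 s of the deployment event"), a check of this shape could turn a vague expectation about disruptions into a testable non-functional requirement.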
Journal
Companion of the 2022 ACM/SPEC International Conference on Performance Engineering