
Companion of the 2022 ACM/SPEC International Conference on Performance Engineering: Latest Publications

SPEC Research - Introducing the Predictive Data Analytics Working Group: Poster Paper
A. Bauer, Mark Leznik, Md Shahriar Iqbal, Daniel Seybold, Igor A. Trubin, Benjamin Erb, Jörg Domaschka, Pooyan Jamshidi
The research field of data analytics has grown significantly with the increase of gathered and available data. Accordingly, a large number of tools, metrics, and best practices have been proposed to make sense of this vast amount of data. To this end, benchmarking and standardization are needed to understand the proposed approaches better and continuously improve them. For this purpose, numerous associations and committees exist. One of them is SPEC (Standard Performance Evaluation Corporation), a non-profit corporation for the standardization and benchmarking of performance and energy evaluations. This paper gives an overview of the recently established SPEC RG Predictive Data Analytics Working Group. The mission of this group is to foster interaction between industry and academia by contributing research to the standardization and benchmarking of various aspects of data analytics.
DOI: 10.1145/3491204.3527495 (published 2022-07-14)
Citations: 1
SPEChpc 2021 Benchmark Suites for Modern HPC Systems
Junjie Li, A. Bobyr, Swen Boehm, W. Brantley, H. Brunst, Aurélien Cavelan, S. Chandrasekaran, Jimmy Cheng, F. Ciorba, Mathew E. Colgrove, Tony Curtis, Christopher Daley, Mauricio H. Ferrato, Mayara Gimenes de Souza, N. Hagerty, R. Henschel, G. Juckeland, J. Kelling, Kelvin Li, Ron Lieberman, Kevin B. McMahon, Egor Melnichenko, M. A. Neggaz, Hiroshi Ono, C. Ponder, Dave Raddatz, Severin Schueller, Robert Searles, Fedor Vasilev, V. G. M. Vergara, Bo Wang, Bert Wesarg, Sandra Wienke, Miguel Zavala
The SPEChpc 2021 suites are application-based benchmarks designed to measure the performance of modern HPC systems. The benchmarks support MPI, MPI+OpenMP, MPI+OpenMP target offload, and MPI+OpenACC, and are portable across all major HPC platforms.
DOI: 10.1145/3491204.3527498 (published 2022-07-14)
Citations: 5
Optimizing the Performance of Fog Computing Environments Using AI and Co-Simulation
Shreshth Tuli, G. Casale
This tutorial presents a performance engineering approach for optimizing the Quality of Service (QoS) of Edge/Fog/Cloud Computing environments using AI and coupled simulation, being developed as part of the Co-Simulation based Container Orchestration (COSCO) framework. It introduces fundamental AI and co-simulation concepts and their importance for QoS optimization and performance engineering challenges in the context of Fog computing. It also discusses how AI models, specifically deep neural networks (DNNs), can be used in tandem with simulated estimates to make optimal resource management decisions. Additionally, we discuss a few use cases of training DNNs as surrogates to estimate key QoS metrics and utilize such models to build policies for dynamic scheduling in a distributed fog environment. The tutorial demonstrates these concepts using the COSCO framework. Metric monitoring and simulation primitives in COSCO demonstrate the efficacy of an AI- and simulation-based scheduler on a fog/cloud platform. Finally, we provide AI baselines for resource management problems that arise in the area of fog management.
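The surrogate-assisted scheduling idea can be sketched in a few lines: a stand-in for a trained DNN predicts a QoS metric (response time) from a candidate placement, and the scheduler places a task on the host with the best predicted QoS. The latency curve, host names, and utilization numbers below are illustrative assumptions, not part of COSCO.

```python
# Sketch of surrogate-assisted scheduling in the spirit of COSCO.
# surrogate_response_time is a hypothetical stand-in for a trained DNN.

def surrogate_response_time(host_util, task_load):
    """Toy surrogate: predicted response time rises sharply as a host
    approaches saturation (an M/M/1-style latency curve)."""
    util = host_util + task_load
    if util >= 1.0:
        return float("inf")  # saturated host: unusable
    return 1.0 / (1.0 - util)

def schedule(task_load, hosts):
    """Place the task on the host whose predicted QoS is best."""
    return min(hosts, key=lambda h: surrogate_response_time(hosts[h], task_load))

# Hypothetical host utilizations; the least-loaded edge host wins here.
hosts = {"edge-0": 0.2, "edge-1": 0.7, "cloud-0": 0.5}
best = schedule(0.25, hosts)  # -> "edge-0"
```

In COSCO the prediction additionally leans on co-simulated estimates; here the surrogate alone drives the placement decision.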
DOI: 10.1145/3491204.3527490 (published 2022-07-14)
Citations: 1
Beware of the Interactions of Variability Layers When Reasoning about Evolution of MongoDB
Luc Lesoil, M. Acher, Arnaud Blouin, J. Jézéquel
With commits and releases, hundreds of tests are run under varying conditions (e.g., different hardware and workloads) that can help to understand evolution and ensure non-regression of software performance. We hypothesize that performance is not only sensitive to the evolution of the software, but also to different variability layers of its execution environment, spanning the hardware, the operating system, the build, or the workload processed by the software. Leveraging the MongoDB dataset, our results show that changes in hardware and workload can drastically impact performance evolution and thus should be taken into account when reasoning about performance. An open problem resulting from this study is how to manage the variability layers in order to efficiently test the performance evolution of a software system.
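A minimal sketch of the layered analysis the paper argues for: group benchmark results by the (hardware, workload) variability layers and compute the version-to-version performance change per layer. All numbers below are hypothetical; the point is only that the sign and size of a change can differ across layers.

```python
from statistics import mean

# Hypothetical throughput measurements for two software versions,
# taken under different (hardware, workload) variability layers.
runs = [
    # (hardware, workload, version, throughput)
    ("x86", "read-heavy",  "v1", 100), ("x86", "read-heavy",  "v2", 120),
    ("arm", "read-heavy",  "v1",  90), ("arm", "read-heavy",  "v2",  85),
    ("x86", "write-heavy", "v1",  60), ("x86", "write-heavy", "v2",  55),
]

def delta_by_layer(runs):
    """Relative change v1 -> v2, per (hardware, workload) layer."""
    out = {}
    for h, w in {(h, w) for h, w, _, _ in runs}:
        v1 = mean(t for hh, ww, v, t in runs if (hh, ww, v) == (h, w, "v1"))
        v2 = mean(t for hh, ww, v, t in runs if (hh, ww, v) == (h, w, "v2"))
        out[(h, w)] = (v2 - v1) / v1
    return out

deltas = delta_by_layer(runs)
# The same version change reads as a +20% speedup on one layer and a
# regression on the other two -- reasoning on the pooled mean hides this.
```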
DOI: 10.1145/3491204.3527489 (published 2022-07-14)
Citations: 0
FADE: Towards Flexible and Adaptive Distance Estimation Considering Obstacles: Vision Paper
Marius Hadry, Veronika Lesch, Samuel Kounev
In the last decades, intensified especially by the pandemic, during which many people stayed at home and ordered goods online, the need for efficient logistics systems has increased significantly. Hence, the performance of optimization techniques for logistics processes is becoming more and more important. These techniques often require estimates of distances to customers and facilities, where operators have to choose between exact results and short computation times. In this vision paper, we propose an approach for Flexible and Adaptive Distance Estimation (FADE). The central idea is to abstract map knowledge into a less complex graph to trade off computation time against result accuracy. We further propose applying concepts from self-aware computing in order to support dynamic adaptation to individual goals.
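The central trade-off can be illustrated with a shortest-path query answered on a full road graph versus an abstracted one that collapses side streets into a single junction-to-junction edge. The graphs, node names, and edge weights below are invented for illustration; FADE's actual abstraction is not reproduced here.

```python
import heapq

def dijkstra(graph, src, dst):
    """Standard Dijkstra shortest-path distance between two nodes."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            return d
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return float("inf")

# Full map: junctions A, C, D plus detail nodes B, X (a side street).
full = {"A": [("B", 1)], "B": [("C", 1.5), ("X", 0.4)],
        "X": [("C", 0.7)], "C": [("D", 1)]}
# Abstracted map: detail collapsed into one slightly coarser A-C edge.
abstracted = {"A": [("C", 2.3)], "C": [("D", 1)]}

exact = dijkstra(full, "A", "D")        # ~3.1, over more nodes/edges
approx = dijkstra(abstracted, "A", "D")  # ~3.3, cheaper but coarser
```

The smaller graph answers the query while touching fewer nodes, at the cost of a bounded error in the estimate, which is exactly the computation-time vs. accuracy trade-off described above.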
DOI: 10.1145/3491204.3527493 (published 2022-07-14)
Citations: 0
A Multiserver Approximation for Cloud Scaling Analysis
Siyu Zhou, C. Woodside
Queueing models of web service systems run at increasingly large scales, with large customer populations and with multiservers introduced by scaling up the services. "Scalable" multiserver approximations, in the sense that they are insensitive to customer population size, are essential for solution in a reasonable time. A thorough analysis of the potential errors, needed before the approximations can be used with confidence, is the goal of this work. Three scalable approximations are evaluated: an equivalent single server (SS), an approximation (RF) introduced by Rolia, and one (AB) based on a binomial distribution for the queue state. AB and SS have been suggested by previous work but not evaluated before. For AB and SS, multiple classes are merged into one to calculate the waiting time. The analysis employs a novel traffic intensity measure for closed multiserver workloads. The vast majority of errors are less than 1%, with the worst cases reaching about 30%. The largest errors occur near the knee of the throughput/response time curves. Of the approximations, AB is consistently the most accurate and SS the least accurate.
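To see why an equivalent-single-server simplification can mislead, compare the exact mean queueing delay of an open M/M/c station (via the Erlang C formula) with an M/M/1 station whose single server is c times faster. This is a generic queueing illustration under assumed parameters, not the paper's AB or RF formulas, and it uses an open rather than closed model.

```python
from math import factorial

def erlang_c_wait(lam, mu, c):
    """Exact mean queueing delay for an M/M/c queue (Erlang C formula)."""
    a = lam / mu            # offered load in Erlangs
    rho = a / c             # per-server utilization, must be < 1
    assert rho < 1
    tail = a**c / (factorial(c) * (1 - rho))
    p_wait = tail / (sum(a**k / factorial(k) for k in range(c)) + tail)
    return p_wait / (c * mu - lam)

def single_server_wait(lam, mu, c):
    """SS-style simplification: one server of rate c*mu (M/M/1 delay)."""
    rho = lam / (c * mu)
    return rho / (c * mu - lam)

# Assumed workload: 8 req/s, unit service rate, 10 servers (80% utilized).
exact = erlang_c_wait(lam=8.0, mu=1.0, c=10)      # ~0.20 s
approx = single_server_wait(lam=8.0, mu=1.0, c=10)  # 0.40 s
```

At this utilization the single-server collapse roughly doubles the predicted wait, which is consistent with the abstract's finding that SS is the least accurate of the evaluated approximations.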
DOI: 10.1145/3491204.3527472 (published 2022-07-14)
Citations: 0
Characterizing and Triaging Change Points
Jing Chen, Haiyang Hu, Dongjin Yu
Testing software performance continuously can greatly benefit from automated verification done on continuous integration (CI) servers, but it generates a large amount of noisy performance test data. To identify the change points in test data, statistical models have been developed in research. However, a considerable share of detected change points are false positives: changes that never actually need to be fixed. This work aims to give a detailed understanding of the features of true positive change points and an automatic approach to change point triage, in order to lighten project members' workload. To achieve this goal, we begin by characterizing change points using 31 features across three dimensions, namely time series, execution result, and file history. Then, we extract the proposed features for true positive and false positive change points, and train machine learning models to triage these change points. The results demonstrate that the features can be efficiently employed to characterize change points. Our model achieves a median AUC of 0.985.
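The triage idea can be sketched with two of the time-series-style features the paper's first dimension suggests: the mean shift across a candidate change point and its signal-to-noise ratio. The feature names, the threshold, and the rule below are illustrative assumptions, not the paper's 31-feature model.

```python
from statistics import mean, pstdev

def features(series, cp):
    """Two toy time-series features around candidate change point index cp."""
    before, after = series[:cp], series[cp:]
    shift = mean(after) - mean(before)
    noise = pstdev(before) or 1e-9  # guard against a perfectly flat baseline
    return {"shift": shift, "snr": abs(shift) / noise}

def is_actionable(series, cp, snr_threshold=3.0):
    """Naive triage rule: flag only shifts that stand well clear of noise."""
    return features(series, cp)["snr"] >= snr_threshold

# Hypothetical benchmark runtimes: one noisy-but-stable series, and one
# with a genuine regression at index 4.
stable = [10.0, 10.1, 9.9, 10.0, 10.1, 10.0, 9.9, 10.1]
regressed = stable[:4] + [12.0, 12.1, 11.9, 12.0]
```

A learned model replaces the fixed threshold in the paper's approach, but the input it consumes is a feature vector of exactly this kind.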
DOI: 10.1145/3491204.3527487 (published 2022-07-14)
Citations: 1
CTT: Load Test Automation for TOSCA-based Cloud Applications
Thomas F. Düllmann, A. Hoorn, Vladimir Yussupov, P. Jakovits, Mainak Adhikari
Despite today's rapid modeling and deployment capabilities for meeting customer requirements in an agile manner, testing remains of utmost importance to avoid outages, unsatisfied customers, and performance problems. (Load) testing is one of several approaches to tackle such issues. In this paper, we introduce the Continuous Testing Tool (CTT), which enables the modeling of tests and test infrastructures along with the cloud system under test, as well as deploying and executing (load) tests against a fully deployed system in an automated manner. CTT employs the OASIS TOSCA standard to enable end-to-end support for continuous testing of cloud-based applications. We demonstrate CTT's workflow, its architecture, and its application to DevOps-oriented load testing and load testing of data pipelines.
DOI: 10.1145/3491204.3527484 (published 2022-07-14)
Citations: 1
MAPLE
Chetan Phalak, Dheeraj Chahal, Aniruddha Sen, Mayank Mishra, Rekha Singhal
Many Artificial Intelligence (AI) applications are composed of multiple machine learning (ML) and deep learning (DL) models. Intelligent process automation (IPA) requires a combination (sequential or parallel) of models to complete an inference task. These models have unique resource requirements, and hence exploring cost-efficient, high-performance deployment architectures, especially across multiple clouds, is a challenge. We propose a high-performance framework, MAPLE, to support the building of applications from composable models. The MAPLE framework is an innovative system for AI applications that (1) recommends various model compositions, (2) recommends an appropriate system configuration based on the application's non-functional requirements, and (3) estimates the performance and cost of cloud deployment for the chosen design.
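The sequential-vs-parallel composition choice has a simple first-order latency model: stages that run one after another add up, while stages that run concurrently cost only the slowest of them. The model names and per-stage latencies below are hypothetical; MAPLE's actual estimator is not reproduced here.

```python
def pipeline_latency(stages, mode):
    """First-order latency of a composed inference pipeline.

    stages: per-model latencies (ms); mode: 'sequential' or 'parallel'.
    """
    if mode == "sequential":
        return sum(stages)   # each stage waits for the previous one
    if mode == "parallel":
        return max(stages)   # bounded by the slowest concurrent stage
    raise ValueError(f"unknown mode: {mode}")

# Hypothetical IPA pipeline: OCR feeds two independent downstream models.
ocr, ner, classifier = 40.0, 25.0, 15.0   # ms, illustrative numbers

seq = pipeline_latency([ocr, ner, classifier], "sequential")  # 80.0 ms
par = pipeline_latency([ner, classifier], "parallel")         # 25.0 ms
total = ocr + par   # OCR first, then NER and classifier in parallel: 65.0 ms
```

Even this toy model shows why the composition choice belongs in the design-space search: the same three models yield materially different end-to-end latencies, and a cost estimate per candidate deployment follows the same stage-wise accounting.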
DOI: 10.1145/3491204.3527497 (published 2022-07-14)
Citations: 0
Analysis of Garbage Collection Patterns to Extend Microbenchmarks for Big Data Workloads
Samyak S. Sarnayak, Aditi Ahuja, Pranav Kesavarapu, Aayush Naik, Santhosh Kumar Vasudevan, Subramaniam Kalambur
Java uses automatic memory allocation where the user does not have to explicitly free used memory. This is done by the garbage collector. Garbage Collection (GC) can take up a significant amount of time, especially in Big Data applications running large workloads where garbage collection can take up to 50 percent of the application's run time. Although benchmarks have been designed to trace garbage collection events, these are not specifically suited for Big Data workloads, due to their unique memory usage patterns. We have developed a free and open source pipeline to extract and analyze object-level details from any Java program including benchmarks and Big Data applications such as Hadoop. The data contains information such as lifetime, class and allocation site of every object allocated by the program. Through the analysis of this data, we propose a small set of benchmarks designed to emulate some of the patterns observed in Big Data applications. These benchmarks also allow us to experiment and compare some Java programming patterns.
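The headline statistic above (GC consuming up to half of an application's run time) is the kind of figure such a pipeline derives from GC event data. A minimal sketch, assuming a simplified, hypothetical log format with one pause duration per line; real JVM GC logs vary by collector and logging flags.

```python
import re

# Hypothetical, simplified GC log: one "[<t>s] GC pause <d>ms" line per event.
LOG = """\
[0.512s] GC pause 30ms
[1.204s] GC pause 45ms
[2.890s] GC pause 120ms
"""

def gc_fraction(log, total_runtime_ms):
    """Share of total run time spent in GC pauses."""
    pauses = [int(ms) for ms in re.findall(r"GC pause (\d+)ms", log)]
    return sum(pauses) / total_runtime_ms

share = gc_fraction(LOG, total_runtime_ms=1000.0)  # 195 / 1000 = 0.195
```

The object-level data the paper extracts (lifetime, class, allocation site) supports much finer analyses than this aggregate, but the aggregate is what flags a workload as GC-bound in the first place.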
DOI: 10.1145/3491204.3527473 (published 2022-07-14)
Citations: 0