
Companion of the 2018 ACM/SPEC International Conference on Performance Engineering: Latest Publications

How to Apply Modeling to Compare Options and Select the Appropriate Cloud Platform
B. Zibitsker, Alex Lupersolsky
Organizations want to take advantage of the flexibility and scalability of Cloud platforms. By migrating to the Cloud, they hope to develop and implement new applications faster and at lower cost. Amazon AWS, Microsoft Azure, Google, IBM, Oracle and other Cloud providers support different DBMSs such as Snowflake, Redshift, Teradata Vantage, and others. These platforms have different architectures, different mechanisms for allocating and managing resources, and different levels of sophistication in their DBMS optimizers, all of which affect performance, scalability and cost. As a result, the response time, CPU service time and number of I/Os for the same query accessing a similar table in the Cloud can differ significantly from On Prem. To select the appropriate Cloud platform, as a first step we perform a Workload Characterization for the On Prem Data Warehouse. Each Data Warehouse workload represents a specific line of business and includes the activity of many users concurrently generating simple and complex queries that access data from different tables. Each workload has different resource demands and different Response Time and Throughput Service Level Goals (SLGs). In this presentation we review the results of the workload characterization for an On Prem Data Warehouse environment. During the second step, we collected measurement data for standard TPC-DS benchmark tests performed on the AWS Vantage, Redshift and Snowflake Cloud platforms for different data set sizes and different numbers of concurrent users. During the third step, we used the workload characterization results and the measurement data collected during the benchmark to modify the BEZNext On Prem Closed Queueing model to model the individual Clouds. Finally, during the fourth step, we used our model to take into consideration differences in concurrency, priorities and resource allocation across workloads. BEZNext optimization algorithms incorporating a gradient search mechanism are used to find the AWS instance type and the minimum number of instances required to meet the SLGs of each workload. Publicly available information about the cost of different AWS instances is used to predict the cost of supporting the workloads in the Cloud, month by month, over the next 12 months.
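
The abstract does not spell out the closed queueing model used in steps three and four, but the classical solution technique for such models is exact Mean Value Analysis (MVA). The sketch below is a minimal single-class MVA in Python with placeholder service demands; it illustrates the kind of response-time and throughput prediction described, not the BEZNext model itself.

```python
# Minimal single-class exact MVA for a closed queueing network: the
# classical way to predict response time and throughput for N
# concurrent users. Service demands are illustrative placeholders,
# not BEZNext's measured data or proprietary model.
def mva(service_demands, think_time, n_users):
    queue = [0.0] * len(service_demands)  # mean queue length per station
    r = x = 0.0
    for n in range(1, n_users + 1):
        # residence time an arriving customer sees at each station
        resid = [d * (1.0 + q) for d, q in zip(service_demands, queue)]
        r = sum(resid)                    # total response time
        x = n / (r + think_time)          # throughput via Little's law
        queue = [x * rt for rt in resid]  # updated queue lengths
    return r, x

# e.g. CPU and I/O demands (seconds per query) for one DW workload:
r, x = mva(service_demands=[0.20, 0.05], think_time=10.0, n_users=50)
print(f"predicted response time {r:.2f} s, throughput {x:.2f} queries/s")
```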
{"title":"How to Apply Modeling to Compare Options and Select the Appropriate Cloud Platform","authors":"B. Zibitsker, Alex Lupersolsky","doi":"10.1145/3375555.3384938","DOIUrl":"https://doi.org/10.1145/3375555.3384938","url":null,"abstract":"Organizations want to take advantage of the flexibility and scalability of Cloud platforms. By migrating to the Cloud, they hope to develop and implement new applications faster with lower cost. Amazon AWS, Microsoft Azure, Google, IBM, Oracle and others Cloud providers support different DBMS like Snowflake, Redshift, Teradata Vantage, and others. These platforms have different architectures, mechanisms of allocation and management of resources, and levels of sophistication of DBMS optimizers which affect performance, scalability and cost. As a result, the response time, CPU Service Time and the number of I/Os for the same query, accessing the similar table in the Cloud could be significantly different than On Prem. In order to select the appropriate Cloud platform as a first step we perform a Workload Characterization for On Prem Data Warehouse. Each Data Warehouse workload represents a specific line of business and includes activity of many users generating concurrently simple and complex queries accessing data from different tables. Each workload has different demands for resources and different Response Time and Throughput Service Level Goals. In this presentation we will review results of the workload characterization for an On Prem Data Warehouse environment. During the second step we collected measurement data for standard TPC-DS benchmark tests performed in AWS Vantage, Redshift and Snowflake Cloud platform for different sizes of the data sets and different number of concurrent users. During the third step we used the results of the workload characterization and measurement data collected during the benchmark to modify BEZNext On Prem Closed Queueing model to model individual Clouds. And finally, during the fourth step we used our Model to take into consideration differences in concurrency, priorities and resource allocation to different workloads. BEZNext optimization algorithms incorporating Graduate search mechanism are used to find the AWS instance type and minimum number of instances which will be required to meet SLGs for each of the workloads. Publicly available information about the cost of the different AWS instances is used to predict the cost of supporting workloads in the Cloud month by month during next 12 months.","PeriodicalId":10596,"journal":{"name":"Companion of the 2018 ACM/SPEC International Conference on Performance Engineering","volume":"83 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-04-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72555523","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Issues Arising in Using Kernel Traces to Make a Performance Model
C. Woodside, S. Tjandra, Gabriel Seyoum
This report is prompted by recent experience with building performance models from kernel traces recorded by LTTng, a tracer that is part of Linux, and by observing other researchers who analyze performance issues directly from the traces. It briefly distinguishes the scope of the two approaches, regarding the model as an abstraction of the trace and model-building as a form of machine learning. For model building, it then discusses how various limitations of the kernel trace information limit the model and its capabilities, and how those limitations might be overcome by using additional information of different kinds. The overall perspective is a tradeoff between effort and model capability.
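
As a concrete illustration of the kind of model-building step the report discusses, the sketch below derives per-task CPU demand from an ordered stream of scheduler context-switch events. The event tuples are a simplified stand-in for LTTng's sched_switch records, not the actual CTF trace format.

```python
# Simplified model-building step: accumulate per-task CPU demand from
# an ordered stream of context-switch events. Tuples stand in for
# LTTng sched_switch records (timestamp, cpu, prev_task, next_task).
from collections import defaultdict

def cpu_demand(sched_switch_events):
    busy = defaultdict(float)  # task -> accumulated CPU seconds
    running = {}               # cpu -> (task, dispatch timestamp)
    for ts, cpu, prev_task, next_task in sched_switch_events:
        if cpu in running:
            task, t0 = running[cpu]
            busy[task] += ts - t0  # the task just switched out
        running[cpu] = (next_task, ts)
    return dict(busy)

events = [(0.000, 0, "idle", "db"), (0.004, 0, "db", "web"),
          (0.009, 0, "web", "idle")]
print(cpu_demand(events))  # {'db': 0.004, 'web': 0.005}
```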
{"title":"Issues Arising in Using Kernel Traces to Make a Performance Model","authors":"C. Woodside, S. Tjandra, Gabriel Seyoum","doi":"10.1145/3375555.3384937","DOIUrl":"https://doi.org/10.1145/3375555.3384937","url":null,"abstract":"This report is prompted by some recent experience with building performance models from kernel traces recorded by LTTng, a tracer that is part of Linux, and by observing other researchers who are analyzing performance issues directly from the traces. It briefly distinguishes the scope of the two approaches, regarding the model as an abstraction of the trace, and the model-building as a form of machine learning. For model building it then discusses how various limitations of the kernel trace information limit the model and its capabilities and how the limitations might be overcome by using additional information of different kinds. The overall perspective is a tradeoff between effort and model capability.","PeriodicalId":10596,"journal":{"name":"Companion of the 2018 ACM/SPEC International Conference on Performance Engineering","volume":"8 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-04-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90407532","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Energy Efficiency Analysis of Compiler Optimizations on the SPEC CPU 2017 Benchmark Suite
Norbert Schmitt, James Bucek, K. Lange, Samuel Kounev
The growth of cloud services leads to more and more data centers that are increasingly large and consume considerable amounts of power. To increase energy efficiency, both the actual server equipment and the software itself must become more energy-efficient; it is the software that controls the hardware to a considerable degree. In this work-in-progress paper, we present a first analysis of how compiler optimizations can influence energy efficiency. We base our analysis on workloads of the SPEC CPU 2017 benchmark. With 43 benchmarks from different domains, including integer and floating-point-heavy computations executed on a state-of-the-art server system for cloud applications, SPEC CPU 2017 offers a representative selection of workloads.
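
The abstract does not state which efficiency metric is used; as a hedged illustration, the sketch below post-processes hypothetical runtime and energy measurements for two optimization levels into average power and work-per-joule figures. All numbers are invented placeholders, and a real study would take them from a power analyzer during the benchmark run.

```python
# Hypothetical post-processing sketch comparing the energy efficiency
# of two optimization levels. All numbers are invented placeholders;
# real values would come from a power analyzer during a SPEC CPU 2017
# run. Efficiency here is simply completed runs per megajoule.
runs = {
    "-O2": {"runtime_s": 812.0, "energy_j": 93500.0},
    "-O3": {"runtime_s": 745.0, "energy_j": 91200.0},
}

for flags, m in runs.items():
    avg_power_w = m["energy_j"] / m["runtime_s"]
    runs_per_mj = 1.0e6 / m["energy_j"]
    print(f"{flags}: {avg_power_w:6.1f} W average, {runs_per_mj:.2f} runs/MJ")
```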
{"title":"Energy Efficiency Analysis of Compiler Optimizations on the SPEC CPU 2017 Benchmark Suite","authors":"Norbert Schmitt, James Bucek, K. Lange, Samuel Kounev","doi":"10.1145/3375555.3383759","DOIUrl":"https://doi.org/10.1145/3375555.3383759","url":null,"abstract":"The growth of cloud services leads to more and more data centers that are increasingly larger and consume considerable amounts of power. To increase energy efficiency, both the actual server equipment and the software themselves must become more energy-efficient. It is the software that controls the hardware to a considerable degree. In this work-in-progress paper, we present a first analysis of how compiler optimizations can influence energy efficiency. We base our analysis on workloads of the SPEC CPU 2017 benchmark. With 43 benchmarks from different domains, including integer and floating-point heavy computations executed on a state-of-the-art server system for cloud applications, SPEC CPU 2017 offers a representative selection of workloads.","PeriodicalId":10596,"journal":{"name":"Companion of the 2018 ACM/SPEC International Conference on Performance Engineering","volume":"43 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-04-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87623382","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
Extended Abstract of Performance Analysis and Prediction of Model Transformation
Vijayshree Vijayshree, Markus Frank, Steffen Becker
In the software development process, model transformation is increasingly adopted. However, systems developed with model transformation sometimes grow in size and become complex, while the performance of model transformation tends to decrease. Hence, performance is an important quality of model transformation. Current research on model transformation performance focuses on optimising the engines internally. However, no research activities exist to support the transformation engineer in identifying performance bottlenecks in the transformation rules and thereby predicting the overall performance. In this paper we present our vision of a monitoring and profiling approach that identifies the root cause of performance issues in the transformation rules and predicts the performance of model transformation. This will enable software engineers to systematically identify performance issues as well as predict the performance of model transformation.
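
The profiling approach is only envisioned here, so the following is purely illustrative: a minimal Python sketch that times each (toy) transformation rule, showing the kind of per-rule data such a profiler would need to collect to locate bottleneck rules.

```python
# Purely illustrative rule-level profiler: wrap each transformation
# rule to record cumulative execution time, the data needed to find
# the bottleneck rule. The rule and model are toys, not a real
# transformation engine.
import time
from collections import defaultdict

rule_time = defaultdict(float)

def profiled(rule):
    def wrapper(element):
        t0 = time.perf_counter()
        result = rule(element)
        rule_time[rule.__name__] += time.perf_counter() - t0
        return result
    return wrapper

@profiled
def class_to_table(cls):  # toy rule: class model element -> DB table
    return {"table": cls["name"], "columns": cls["attrs"]}

model = [{"name": f"C{i}", "attrs": ["id"]} for i in range(1000)]
tables = [class_to_table(c) for c in model]
print("time per rule (s):", dict(rule_time))
```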
{"title":"Extended Abstract of Performance Analysis and Prediction of Model Transformation","authors":"Vijayshree Vijayshree, Markus Frank, Steffen Becker","doi":"10.1145/3358960.3383769","DOIUrl":"https://doi.org/10.1145/3358960.3383769","url":null,"abstract":"In the software development process, model transformation is increasingly assimilated. However, systems being developed with model transformation sometimes grow in size and become complex. Meanwhile, the performance of model transformation tends to decrease. Hence, performance is an important quality of model transformation. According to current research model transformation performance focuses on optimising the engines internally. However, there exists no research activities to support transformation engineer to identify performance bottleneck in the transformation rules and hence, to predict the overall performance. In this paper we vision our aim at providing an approach of monitoring and profiling to identify the root cause of performance issues in the transformation rules and to predict the performance of model transformation. This will enable software engineers to systematically identify performance issues as well as predict the performance of model transformation.","PeriodicalId":10596,"journal":{"name":"Companion of the 2018 ACM/SPEC International Conference on Performance Engineering","volume":"52 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-04-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79297418","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Acceleration Opportunities in Linear Algebra Applications via Idiom Recognition
J. P. L. Carvalho, Braedy Kuzma, G. Araújo
General matrix-matrix multiplication (GEMM) is a critical operation in many application domains [1]. It is a central building block of deep learning algorithms, computer graphics operations, and other linear-algebra-dominated applications. Due to this, GEMM has been extensively studied and optimized, resulting in libraries of exceptional quality such as BLAS and Eigen, and in platform-specific implementations such as MKL (Intel) and ESSL (IBM) [2,3]. Despite these successes, the GEMM idiom continues to be re-implemented by programmers, without consideration for the intricacies already accounted for by the aforementioned libraries. To this end, this project aims to provide transparent adoption of high-performance implementations of GEMM through a novel optimization pass, implemented within the LLVM framework using idiom recognition techniques [4]. Sub-optimal implementations of GEMM are replaced by equivalent library calls.
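
To make the idiom concrete, the sketch below contrasts, at source level, the hand-written triple loop that an idiom-recognition pass targets with the equivalent BLAS-backed library call it substitutes (here NumPy's `@` operator stands in for the library call the LLVM pass would emit).

```python
# Source-level analogue of the rewrite the pass performs: the naive
# triple loop is the idiom being recognized; A @ B dispatches to an
# optimized BLAS GEMM, as the substituted library call would.
import numpy as np

def naive_gemm(A, B):
    n, k = A.shape
    _, m = B.shape
    C = np.zeros((n, m))
    for i in range(n):
        for j in range(m):
            for p in range(k):
                C[i, j] += A[i, p] * B[p, j]
    return C

A, B = np.random.rand(64, 64), np.random.rand(64, 64)
assert np.allclose(naive_gemm(A, B), A @ B)  # identical result, far slower
```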
{"title":"Acceleration Opportunities in Linear Algebra Applications via Idiom Recognition","authors":"J. P. L. Carvalho, Braedy Kuzma, G. Araújo","doi":"10.1145/3375555.3383586","DOIUrl":"https://doi.org/10.1145/3375555.3383586","url":null,"abstract":"General matrix-matrix multiplication (GEMM) is a critical operation in many application domains [1]. It is a central building block of deep learning algorithms, computer graphics operations, and other linear algebra dominated applications. Due to this, GEMM has been extensively studied and optimized, resulting in libraries of exceptional quality such as BLAS, Eigen, and other platform specific implementations such as MKL (Intel) and ESSL (IBM) [2,3]. Despite these successes, the GeMM idiom continues to be re-implemented by programmers, without consideration for the intricacies already accounted for by the aforementioned libraries. To this end, this project aims to provide transparent adoption of high-performance implementations of GEMM through a novel optimization pass implemented within the LLVM framework using idiom recognition techniques[4]. Sub-optimal implementations of GEMM are replaced by equivalent library calls.","PeriodicalId":10596,"journal":{"name":"Companion of the 2018 ACM/SPEC International Conference on Performance Engineering","volume":"6 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-04-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89916387","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Migrating a Recommendation System to Cloud Using ML Workflow
Dheeraj Chahal, Ravi Ojha, Sharod Roy Choudhury, M. Nambiar
Inference is the production stage of the machine learning workflow, in which a trained model is used to infer or predict with real-world data. A recommendation system improves customer experience by displaying the most relevant items based on the historical behavior of a customer. Machine learning models built for recommendation systems are either deployed on-premise or migrated to a cloud for inference in real time or in batch. A recommendation system should be cost-effective while honoring service level agreements (SLAs). In this work we discuss the on-premise implementation of our recommendation system, called iPrescribe. We show a methodology for migrating the on-premise implementation of the recommendation system to a cloud using an ML workflow. We also present our study of the performance of the recommendation system model when deployed on different types of virtual instances.
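
As a hedged illustration of the instance-type comparison step, the sketch below times batch inference of a stand-in scikit-learn model to estimate per-request latency on a given virtual instance; the iPrescribe model itself is not public, so the model and data here are placeholders.

```python
# Hypothetical measurement harness for the instance-type study: time
# batch inference of a stand-in model (scikit-learn, not iPrescribe)
# to check per-request latency against an SLA on this instance.
import time
import numpy as np
from sklearn.ensemble import RandomForestClassifier

X, y = np.random.rand(1000, 20), np.random.randint(0, 2, 1000)
model = RandomForestClassifier(n_estimators=50).fit(X, y)

batch = np.random.rand(512, 20)
t0 = time.perf_counter()
model.predict(batch)
ms_per_request = (time.perf_counter() - t0) / len(batch) * 1e3
print(f"{ms_per_request:.3f} ms/request on this instance type")
```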
{"title":"Migrating a Recommendation System to Cloud Using ML Workflow","authors":"Dheeraj Chahal, Ravi Ojha, Sharod Roy Choudhury, M. Nambiar","doi":"10.1145/3375555.3384423","DOIUrl":"https://doi.org/10.1145/3375555.3384423","url":null,"abstract":"Inference is the production stage of machine learning workflow in which a trained model is used to infer or predict with real world data. A recommendation system improves customer experience by displaying most relevant items based on historical behavior of a customer. Machine learning models built for recommendation systems are deployed either on-premise or migrated to a cloud for inference in real time or batch. A recommendation system should be cost effective while honoring service level agreements (SLAs). In this work we discuss on-premise implementation of our recommendation system called iPrescribe. We show a methodology to migrate on-premise implementation of recommendation system to a cloud using ML workflow. We also present our study on performance of recommendation system model when deployed on different types of virtual instances.","PeriodicalId":10596,"journal":{"name":"Companion of the 2018 ACM/SPEC International Conference on Performance Engineering","volume":"2017 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-04-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87787926","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 9
Beyond Microbenchmarks: The SPEC-RG Vision for a Comprehensive Serverless Benchmark
Erwin Van Eyk, Joel Scheuner, Simon Eismann, Cristina L. Abad, A. Iosup
Serverless computing services, such as Function-as-a-Service (FaaS), hold the attractive promise of a high level of abstraction and high performance, combined with the minimization of operational logic. Several large ecosystems of serverless platforms, both open- and closed-source, aim to realize this promise, and consequently a lucrative market has emerged. However, the performance trade-offs of these systems are not well understood. Moreover, it is exactly the high level of abstraction and the opaqueness of the operational side that make performance evaluation studies of serverless platforms challenging. Learning from the history of IT platforms, we argue that a benchmark for serverless platforms could help address this challenge. We envision a comprehensive serverless benchmark, which we contrast to the narrow focus of prior work in this area. We argue that a comprehensive benchmark will need to take into account more than just runtime overhead, and include notions of cost, realistic workloads, more (open-source) platforms, and cloud integrations. Finally, we show through preliminary real-world experiments how such a benchmark can help compare the performance overhead when running a serverless workload on state-of-the-art platforms.
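
As a small example of the microbenchmark-style measurement the authors argue is necessary but not sufficient, the sketch below samples warm-invocation latency of a deployed function over HTTP. The endpoint URL is a hypothetical placeholder; a comprehensive benchmark would add cost accounting, realistic workloads, cold starts, and multiple platforms.

```python
# Warm-invocation latency sample for one deployed function. The URL
# is a placeholder, not a real endpoint; cold starts, cost, and
# multi-platform runs would be needed for a comprehensive benchmark.
import statistics
import time
import urllib.request

URL = "https://example.com/fn"  # hypothetical FaaS HTTP trigger

def invoke():
    t0 = time.perf_counter()
    urllib.request.urlopen(URL).read()
    return time.perf_counter() - t0

samples = sorted(invoke() for _ in range(20))
print(f"median {statistics.median(samples) * 1e3:.1f} ms, "
      f"p95 {samples[int(0.95 * len(samples))] * 1e3:.1f} ms")
```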
{"title":"Beyond Microbenchmarks: The SPEC-RG Vision for a Comprehensive Serverless Benchmark","authors":"Erwin Van Eyk, Joel Scheuner, Simon Eismann, Cristina L. Abad, A. Iosup","doi":"10.1145/3375555.3384381","DOIUrl":"https://doi.org/10.1145/3375555.3384381","url":null,"abstract":"Serverless computing services, such as Function-as-a-Service (FaaS), hold the attractive promise of a high level of abstraction and high performance, combined with the minimization of operational logic. Several large ecosystems of serverless platforms, both open- and closed-source, aim to realize this promise. Consequently, a lucrative market has emerged. However, the performance trade-offs of these systems are not well-understood. Moreover, it is exactly the high level of abstraction and the opaqueness of the operational-side that make performance evaluation studies of serverless platforms challenging. Learning from the history of IT platforms, we argue that a benchmark for serverless platforms could help address this challenge. We envision a comprehensive serverless benchmark, which we contrast to the narrow focus of prior work in this area. We argue that a comprehensive benchmark will need to take into account more than just runtime overhead, and include notions of cost, realistic workloads, more (open-source) platforms, and cloud integrations. Finally, we show through preliminary real-world experiments how such a benchmark can help compare the performance overhead when running a serverless workload on state-of-the-art platforms.","PeriodicalId":10596,"journal":{"name":"Companion of the 2018 ACM/SPEC International Conference on Performance Engineering","volume":"245 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-04-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89168580","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 17
Performance Anomaly and Change Point Detection For Large-Scale System Management
Igor A. Trubin
We begin by presenting a short overview of classical Statistical Process Control (SPC) based anomaly detection techniques and tools, including Multivariate Adaptive Statistical Filtering (MASF), the Statistical Exception Detection System (SEDS), Exception Value (EV) meta-metric based change point detection, control charts, business-driven massive prediction, and methods of using them to manage large-scale systems such as on-prem server fleets or massive clouds, with real examples of applying them at large financial companies. Then we turn to modern techniques of anomaly and normality detection, such as deep learning and entropy-based anomalous pattern detection, also successfully tested against a large amount of real performance data from a large bank.
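
A minimal sketch of the classical SPC idea behind MASF/SEDS-style detection follows: historical samples define a control band, and an Exception Value (EV)-style meta-metric accumulates how far current samples fall outside it. The 3-sigma band is textbook control-chart practice, not necessarily the exact thresholds these tools use.

```python
# Classical SPC building blocks behind MASF/SEDS-style detection:
# a 3-sigma control band from history plus an Exception Value (EV)
# meta-metric measuring how far current samples escape the band.
# Thresholds follow textbook control-chart practice, not the tools'
# exact settings.
import statistics

def control_limits(history):
    mu = statistics.mean(history)
    sigma = statistics.stdev(history)
    return mu - 3 * sigma, mu + 3 * sigma

def exception_value(actual, lo, hi):
    # total excess of the metric outside the control band
    return sum(max(0.0, v - hi) + max(0.0, lo - v) for v in actual)

history = [52, 48, 50, 51, 49, 53, 47, 50, 52, 49]
lo, hi = control_limits(history)
today = [50, 51, 72, 75, 49]  # burst above the upper control limit
print(f"band [{lo:.1f}, {hi:.1f}], EV = {exception_value(today, lo, hi):.1f}")
```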
{"title":"Performance Anomaly and Change Point Detection For Large-Scale System Management","authors":"Igor A. Trubin","doi":"10.1145/3375555.3384934","DOIUrl":"https://doi.org/10.1145/3375555.3384934","url":null,"abstract":"We begin by presenting a short overview of the classical Statistical Process Control based Anomaly Detection techniques and tools including Multivariate Adaptive Statistical Filtering, Statistical Exception Detection System, Exception Value meta-metric based Change Point Detection, control chart, business driven massive prediction and methods of using them to manage large-scale systems (with real examples of applying that to large financial companies) such as on-prem servers fleet, or massive clouds. Then we will turn to the presentation of modern techniques of anomaly and normality detection, such as deep learning and entropy-based anomalous pattern detections (also successfully tested against a large amount of real performance data of a large bank).","PeriodicalId":10596,"journal":{"name":"Companion of the 2018 ACM/SPEC International Conference on Performance Engineering","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-04-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88316607","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Towards Performance Modeling of Speculative Execution for Cloud Applications
Tommi Nylander, Johan Ruuskanen, Karl-Erik Årzén, M. Maggio
Interesting approaches to counteract performance variability within cloud datacenters include sending multiple request clones, either immediately or after a specified waiting time. In this paper we present a performance model of cloud applications that utilize the latter concept, known as speculative execution. We study the popular Join-Shortest-Queue load-balancing strategy under the processor sharing queuing discipline. Utilizing the near-synchronized service property of this setting, we model speculative execution using a simplified synchronized service model. Our model is approximate, but accurate enough to be useful even for high utilization scenarios. Furthermore, the model is valid for any, possibly empirical, inter-arrival and service time distributions. We present preliminary simulation results, showing the promise of our proposed model.
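
A rough Monte Carlo sketch of the cloning idea the paper models analytically: each request's clone is issued after a waiting time w, and the first copy to finish wins. Exponential service times and the absence of queueing are simplifying assumptions for illustration only; the paper's model covers arbitrary distributions and processor sharing.

```python
# Monte Carlo sketch: issue a clone after waiting time w, keep the
# first result. Exponential service and no queueing are simplifying
# assumptions; the paper's model handles general distributions and
# processor sharing.
import random

def response_time(w, rate=1.0):
    first = random.expovariate(rate)       # original request's service
    if first <= w:
        return first                       # done before clone is sent
    clone = w + random.expovariate(rate)   # clone starts at time w
    return min(first, clone)

samples = [response_time(w=0.5) for _ in range(100000)]
print("mean response time with cloning:", sum(samples) / len(samples))
```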
{"title":"Towards Performance Modeling of Speculative Execution for Cloud Applications","authors":"Tommi Nylander, Johan Ruuskanen, Karl-Erik Årzén, M. Maggio","doi":"10.1145/3375555.3384379","DOIUrl":"https://doi.org/10.1145/3375555.3384379","url":null,"abstract":"Interesting approaches to counteract performance variability within cloud datacenters include sending multiple request clones, either immediately or after a specified waiting time. In this paper we present a performance model of cloud applications that utilize the latter concept, known as speculative execution. We study the popular Join-Shortest-Queue load-balancing strategy under the processor sharing queuing discipline. Utilizing the near-synchronized service property of this setting, we model speculative execution using a simplified synchronized service model. Our model is approximate, but accurate enough to be useful even for high utilization scenarios. Furthermore, the model is valid for any, possibly empirical, inter-arrival and service time distributions. We present preliminary simulation results, showing the promise of our proposed model.","PeriodicalId":10596,"journal":{"name":"Companion of the 2018 ACM/SPEC International Conference on Performance Engineering","volume":"42 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-04-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78476868","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Performance Engineering for Microservices and Serverless Applications: The RADON Approach
Alim Ul Gias, A. Hoorn, Lulai Zhu, G. Casale, Thomas F. Düllmann, Michael Wurster
Microservices and serverless functions are becoming integral parts of modern cloud-based applications. Tailored performance engineering is needed to assure that the applications meet their requirements for quality attributes such as timeliness, resource efficiency, and elasticity. A novel DevOps-based framework for developing microservices and serverless applications is being developed in the RADON project. RADON contributes to performance engineering by including novel approaches for modeling, deployment optimization, testing, and runtime management. This paper summarizes the contents of our tutorial presented at the 11th ACM/SPEC International Conference on Performance Engineering (ICPE).
{"title":"Performance Engineering for Microservices and Serverless Applications: The RADON Approach","authors":"Alim Ul Gias, A. Hoorn, Lulai Zhu, G. Casale, Thomas F. Düllmann, Michael Wurster","doi":"10.1145/3375555.3383120","DOIUrl":"https://doi.org/10.1145/3375555.3383120","url":null,"abstract":"Microservices and serverless functions are becoming integral parts of modern cloud-based applications. Tailored performance engineering is needed for assuring that the applications meet their requirements for quality attributes such as timeliness, resource efficiency, and elasticity. A novel DevOps-based framework for developing microservices and serverless applications is being developed in the RADON project. RADON contributes to performance engineering by including novel approaches for modeling, deployment optimization, testing, and runtime management. This paper summarizes the contents of our tutorial presented at the 11th ACM/SPEC International Conference on Performance Engineering (ICPE).","PeriodicalId":10596,"journal":{"name":"Companion of the 2018 ACM/SPEC International Conference on Performance Engineering","volume":"2 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-04-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85414629","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4