How to Apply Modeling to Compare Options and Select the Appropriate Cloud Platform
B. Zibitsker, Alex Lupersolsky
DOI: 10.1145/3375555.3384938

Organizations want to take advantage of the flexibility and scalability of cloud platforms. By migrating to the cloud, they hope to develop and implement new applications faster and at lower cost. Amazon AWS, Microsoft Azure, Google, IBM, Oracle, and other cloud providers support different DBMSs, such as Snowflake, Redshift, and Teradata Vantage. These platforms differ in architecture, in their mechanisms for allocating and managing resources, and in the sophistication of their DBMS optimizers, all of which affect performance, scalability, and cost. As a result, the response time, CPU service time, and number of I/Os for the same query accessing a similar table can differ significantly in the cloud from on premises. To select the appropriate cloud platform, as a first step we perform a workload characterization of the on-premises data warehouse. Each data warehouse workload represents a specific line of business and includes the activity of many users concurrently generating simple and complex queries that access data from different tables. Each workload has different resource demands and different response-time and throughput service level goals (SLGs). In this presentation we review the results of workload characterization for an on-premises data warehouse environment. In the second step, we collected measurement data from standard TPC-DS benchmark tests performed on the Vantage, Redshift, and Snowflake cloud platforms on AWS, for different data set sizes and different numbers of concurrent users. In the third step, we used the workload characterization results and the benchmark measurements to adapt the BEZNext on-premises closed queueing model to model the individual clouds. Finally, in the fourth step, we used our model to take into account differences in concurrency, priorities, and resource allocation across workloads. BEZNext optimization algorithms incorporating a gradient-search mechanism are used to find the AWS instance type and the minimum number of instances required to meet the SLGs of each workload. Publicly available pricing for the different AWS instances is used to predict the cost of supporting the workloads in the cloud, month by month, over the next 12 months.

Issues Arising in Using Kernel Traces to Make a Performance Model
C. Woodside, S. Tjandra, Gabriel Seyoum
DOI: 10.1145/3375555.3384937

This report is prompted by recent experience with building performance models from kernel traces recorded by LTTng, a tracer that is part of Linux, and by observing other researchers who analyze performance issues directly from the traces. It briefly distinguishes the scope of the two approaches, regarding the model as an abstraction of the trace and model-building as a form of machine learning. For model building, it then discusses how various limitations of the kernel trace information limit the model and its capabilities, and how those limitations might be overcome by using additional information of different kinds. The overall perspective is a tradeoff between effort and model capability.

Energy Efficiency Analysis of Compiler Optimizations on the SPEC CPU 2017 Benchmark Suite
Norbert Schmitt, James Bucek, K. Lange, Samuel Kounev
DOI: 10.1145/3375555.3383759
The growth of cloud services leads to ever more, and ever larger, data centers that consume considerable amounts of power. To increase energy efficiency, both the actual server equipment and the software itself must become more energy-efficient, and it is the software that controls the hardware to a considerable degree. In this work-in-progress paper, we present a first analysis of how compiler optimizations can influence energy efficiency. We base our analysis on workloads from the SPEC CPU 2017 benchmark. With 43 benchmarks from different domains, including integer- and floating-point-heavy computations, executed on a state-of-the-art server system for cloud applications, SPEC CPU 2017 offers a representative selection of workloads.
{"title":"Energy Efficiency Analysis of Compiler Optimizations on the SPEC CPU 2017 Benchmark Suite","authors":"Norbert Schmitt, James Bucek, K. Lange, Samuel Kounev","doi":"10.1145/3375555.3383759","DOIUrl":"https://doi.org/10.1145/3375555.3383759","url":null,"abstract":"The growth of cloud services leads to more and more data centers that are increasingly larger and consume considerable amounts of power. To increase energy efficiency, both the actual server equipment and the software themselves must become more energy-efficient. It is the software that controls the hardware to a considerable degree. In this work-in-progress paper, we present a first analysis of how compiler optimizations can influence energy efficiency. We base our analysis on workloads of the SPEC CPU 2017 benchmark. With 43 benchmarks from different domains, including integer and floating-point heavy computations executed on a state-of-the-art server system for cloud applications, SPEC CPU 2017 offers a representative selection of workloads.","PeriodicalId":10596,"journal":{"name":"Companion of the 2018 ACM/SPEC International Conference on Performance Engineering","volume":"43 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-04-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87623382","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Extended Abstract of Performance Analysis and Prediction of Model Transformation
Vijayshree Vijayshree, Markus Frank, Steffen Becker
DOI: 10.1145/3358960.3383769
In the software development process, model transformation is increasingly adopted. However, systems developed with model transformation sometimes grow in size and become complex, and the performance of the transformations tends to decrease accordingly. Performance is therefore an important quality of model transformation. Current research on model transformation performance focuses on optimizing the engines internally. However, no research activity supports transformation engineers in identifying performance bottlenecks in the transformation rules and, hence, in predicting the overall performance. In this paper we present our vision of a monitoring and profiling approach to identify the root causes of performance issues in transformation rules and to predict the performance of model transformations. This will enable software engineers to systematically identify performance issues as well as predict the performance of model transformation.
{"title":"Extended Abstract of Performance Analysis and Prediction of Model Transformation","authors":"Vijayshree Vijayshree, Markus Frank, Steffen Becker","doi":"10.1145/3358960.3383769","DOIUrl":"https://doi.org/10.1145/3358960.3383769","url":null,"abstract":"In the software development process, model transformation is increasingly assimilated. However, systems being developed with model transformation sometimes grow in size and become complex. Meanwhile, the performance of model transformation tends to decrease. Hence, performance is an important quality of model transformation. According to current research model transformation performance focuses on optimising the engines internally. However, there exists no research activities to support transformation engineer to identify performance bottleneck in the transformation rules and hence, to predict the overall performance. In this paper we vision our aim at providing an approach of monitoring and profiling to identify the root cause of performance issues in the transformation rules and to predict the performance of model transformation. This will enable software engineers to systematically identify performance issues as well as predict the performance of model transformation.","PeriodicalId":10596,"journal":{"name":"Companion of the 2018 ACM/SPEC International Conference on Performance Engineering","volume":"52 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-04-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79297418","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Acceleration Opportunities in Linear Algebra Applications via Idiom Recognition
J. P. L. Carvalho, Braedy Kuzma, G. Araújo
DOI: 10.1145/3375555.3383586

General matrix-matrix multiplication (GEMM) is a critical operation in many application domains [1]. It is a central building block of deep learning algorithms, computer graphics operations, and other linear-algebra-dominated applications. GEMM has therefore been extensively studied and optimized, resulting in libraries of exceptional quality such as BLAS and Eigen, and platform-specific implementations such as MKL (Intel) and ESSL (IBM) [2,3]. Despite these successes, the GEMM idiom continues to be re-implemented by programmers without consideration for the intricacies already accounted for by the aforementioned libraries. To this end, this project aims to provide transparent adoption of high-performance implementations of GEMM through a novel optimization pass, implemented within the LLVM framework, that uses idiom recognition techniques [4]. Sub-optimal implementations of GEMM are replaced by equivalent library calls.

Migrating a Recommendation System to Cloud Using ML Workflow
Dheeraj Chahal, Ravi Ojha, Sharod Roy Choudhury, M. Nambiar
DOI: 10.1145/3375555.3384423
Inference is the production stage of the machine learning workflow, in which a trained model is used to infer or predict from real-world data. A recommendation system improves customer experience by displaying the most relevant items based on a customer's historical behavior. Machine learning models built for recommendation systems are deployed either on-premises or migrated to a cloud for real-time or batch inference. A recommendation system should be cost-effective while honoring service level agreements (SLAs). In this work we discuss the on-premises implementation of our recommendation system, called iPrescribe. We show a methodology for migrating an on-premises recommendation system to a cloud using an ML workflow. We also present our study of the model's performance when deployed on different types of virtual instances.
{"title":"Migrating a Recommendation System to Cloud Using ML Workflow","authors":"Dheeraj Chahal, Ravi Ojha, Sharod Roy Choudhury, M. Nambiar","doi":"10.1145/3375555.3384423","DOIUrl":"https://doi.org/10.1145/3375555.3384423","url":null,"abstract":"Inference is the production stage of machine learning workflow in which a trained model is used to infer or predict with real world data. A recommendation system improves customer experience by displaying most relevant items based on historical behavior of a customer. Machine learning models built for recommendation systems are deployed either on-premise or migrated to a cloud for inference in real time or batch. A recommendation system should be cost effective while honoring service level agreements (SLAs). In this work we discuss on-premise implementation of our recommendation system called iPrescribe. We show a methodology to migrate on-premise implementation of recommendation system to a cloud using ML workflow. We also present our study on performance of recommendation system model when deployed on different types of virtual instances.","PeriodicalId":10596,"journal":{"name":"Companion of the 2018 ACM/SPEC International Conference on Performance Engineering","volume":"2017 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-04-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87787926","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Beyond Microbenchmarks: The SPEC-RG Vision for a Comprehensive Serverless Benchmark
Erwin Van Eyk, Joel Scheuner, Simon Eismann, Cristina L. Abad, A. Iosup
DOI: 10.1145/3375555.3384381
Serverless computing services, such as Function-as-a-Service (FaaS), hold the attractive promise of a high level of abstraction and high performance, combined with the minimization of operational logic. Several large ecosystems of serverless platforms, both open- and closed-source, aim to realize this promise, and a lucrative market has emerged. However, the performance trade-offs of these systems are not well understood. Moreover, it is exactly the high level of abstraction and the opaqueness of the operational side that make performance evaluation studies of serverless platforms challenging. Learning from the history of IT platforms, we argue that a benchmark for serverless platforms could help address this challenge. We envision a comprehensive serverless benchmark, which we contrast with the narrow focus of prior work in this area. We argue that a comprehensive benchmark must take into account more than just runtime overhead, and include notions of cost, realistic workloads, more (open-source) platforms, and cloud integrations. Finally, we show through preliminary real-world experiments how such a benchmark can help compare the performance overhead of running a serverless workload on state-of-the-art platforms.
{"title":"Beyond Microbenchmarks: The SPEC-RG Vision for a Comprehensive Serverless Benchmark","authors":"Erwin Van Eyk, Joel Scheuner, Simon Eismann, Cristina L. Abad, A. Iosup","doi":"10.1145/3375555.3384381","DOIUrl":"https://doi.org/10.1145/3375555.3384381","url":null,"abstract":"Serverless computing services, such as Function-as-a-Service (FaaS), hold the attractive promise of a high level of abstraction and high performance, combined with the minimization of operational logic. Several large ecosystems of serverless platforms, both open- and closed-source, aim to realize this promise. Consequently, a lucrative market has emerged. However, the performance trade-offs of these systems are not well-understood. Moreover, it is exactly the high level of abstraction and the opaqueness of the operational-side that make performance evaluation studies of serverless platforms challenging. Learning from the history of IT platforms, we argue that a benchmark for serverless platforms could help address this challenge. We envision a comprehensive serverless benchmark, which we contrast to the narrow focus of prior work in this area. We argue that a comprehensive benchmark will need to take into account more than just runtime overhead, and include notions of cost, realistic workloads, more (open-source) platforms, and cloud integrations. Finally, we show through preliminary real-world experiments how such a benchmark can help compare the performance overhead when running a serverless workload on state-of-the-art platforms.","PeriodicalId":10596,"journal":{"name":"Companion of the 2018 ACM/SPEC International Conference on Performance Engineering","volume":"245 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-04-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89168580","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Performance Anomaly and Change Point Detection For Large-Scale System Management
Igor A. Trubin
DOI: 10.1145/3375555.3384934

We begin by presenting a short overview of classical Statistical Process Control based anomaly detection techniques and tools, including Multivariate Adaptive Statistical Filtering (MASF), the Statistical Exception Detection System, Exception Value meta-metric based change point detection, control charts, and business-driven massive prediction, along with methods of using them to manage large-scale systems such as on-premises server fleets or massive clouds, with real examples of applying them at large financial companies. We then turn to modern techniques of anomaly and normality detection, such as deep learning and entropy-based anomalous pattern detection, also successfully tested against a large amount of real performance data from a large bank.

Towards Performance Modeling of Speculative Execution for Cloud Applications
Tommi Nylander, Johan Ruuskanen, Karl-Erik Årzén, M. Maggio
DOI: 10.1145/3375555.3384379
Interesting approaches to counteracting performance variability within cloud datacenters include sending multiple request clones, either immediately or after a specified waiting time. In this paper we present a performance model of cloud applications that utilize the latter concept, known as speculative execution. We study the popular Join-Shortest-Queue load-balancing strategy under the processor-sharing queueing discipline. Utilizing the near-synchronized service property of this setting, we model speculative execution using a simplified synchronized service model. Our model is approximate, but accurate enough to be useful even in high-utilization scenarios. Furthermore, the model is valid for any, possibly empirical, inter-arrival and service time distributions. We present preliminary simulation results showing the promise of our proposed model.
{"title":"Towards Performance Modeling of Speculative Execution for Cloud Applications","authors":"Tommi Nylander, Johan Ruuskanen, Karl-Erik Årzén, M. Maggio","doi":"10.1145/3375555.3384379","DOIUrl":"https://doi.org/10.1145/3375555.3384379","url":null,"abstract":"Interesting approaches to counteract performance variability within cloud datacenters include sending multiple request clones, either immediately or after a specified waiting time. In this paper we present a performance model of cloud applications that utilize the latter concept, known as speculative execution. We study the popular Join-Shortest-Queue load-balancing strategy under the processor sharing queuing discipline. Utilizing the near-synchronized service property of this setting, we model speculative execution using a simplified synchronized service model. Our model is approximate, but accurate enough to be useful even for high utilization scenarios. Furthermore, the model is valid for any, possibly empirical, inter-arrival and service time distributions. We present preliminary simulation results, showing the promise of our proposed model.","PeriodicalId":10596,"journal":{"name":"Companion of the 2018 ACM/SPEC International Conference on Performance Engineering","volume":"42 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-04-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78476868","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Performance Engineering for Microservices and Serverless Applications: The RADON Approach
Alim Ul Gias, A. Hoorn, Lulai Zhu, G. Casale, Thomas F. Düllmann, Michael Wurster
DOI: 10.1145/3375555.3383120
Microservices and serverless functions are becoming integral parts of modern cloud-based applications. Tailored performance engineering is needed to assure that the applications meet their requirements for quality attributes such as timeliness, resource efficiency, and elasticity. A novel DevOps-based framework for developing microservices and serverless applications is being developed in the RADON project. RADON contributes to performance engineering by including novel approaches for modeling, deployment optimization, testing, and runtime management. This paper summarizes the contents of our tutorial presented at the 11th ACM/SPEC International Conference on Performance Engineering (ICPE).
{"title":"Performance Engineering for Microservices and Serverless Applications: The RADON Approach","authors":"Alim Ul Gias, A. Hoorn, Lulai Zhu, G. Casale, Thomas F. Düllmann, Michael Wurster","doi":"10.1145/3375555.3383120","DOIUrl":"https://doi.org/10.1145/3375555.3383120","url":null,"abstract":"Microservices and serverless functions are becoming integral parts of modern cloud-based applications. Tailored performance engineering is needed for assuring that the applications meet their requirements for quality attributes such as timeliness, resource efficiency, and elasticity. A novel DevOps-based framework for developing microservices and serverless applications is being developed in the RADON project. RADON contributes to performance engineering by including novel approaches for modeling, deployment optimization, testing, and runtime management. This paper summarizes the contents of our tutorial presented at the 11th ACM/SPEC International Conference on Performance Engineering (ICPE).","PeriodicalId":10596,"journal":{"name":"Companion of the 2018 ACM/SPEC International Conference on Performance Engineering","volume":"2 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-04-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85414629","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}