{"title":"ICPE '21: ACM/SPEC International Conference on Performance Engineering, Virtual Event, France, April 19-21, 2021","authors":"","doi":"10.1145/3427921","DOIUrl":"https://doi.org/10.1145/3427921","url":null,"abstract":"","PeriodicalId":10596,"journal":{"name":"Companion of the 2018 ACM/SPEC International Conference on Performance Engineering","volume":"12 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77019332","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ICPE '21: ACM/SPEC International Conference on Performance Engineering, Virtual Event, France, April 19-21, 2021, Companion Volume","authors":"","doi":"10.1145/3447545","DOIUrl":"https://doi.org/10.1145/3447545","url":null,"abstract":"","PeriodicalId":10596,"journal":{"name":"Companion of the 2018 ACM/SPEC International Conference on Performance Engineering","volume":"124 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90908236","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: 10 Years Later: Cloud Computing is Closing the Performance Gap
Authors: Giulia Guidi, Marquita Ellis, A. Buluç, K. Yelick, D. Culler
DOI: https://doi.org/10.1145/3447545.3451183
Published: 2020-11-02
Abstract: Can cloud computing infrastructures provide HPC-competitive performance for scientific applications broadly? Despite prolific related literature, this question remains open. Answers are crucial for designing future systems and democratizing high-performance computing. We present a multi-level approach to investigating the performance gap between HPC and cloud computing, isolating the different variables that contribute to this gap. Our experiments are divided into (i) hardware and system microbenchmarks and (ii) user application proxies. The results show that today's high-end cloud computing can deliver HPC-competitive performance, not only for computationally intensive applications but also for memory- and communication-intensive applications, at least at modest scales, thanks to the high-speed memory systems, interconnects, and dedicated batch scheduling now available on some cloud platforms.
{"title":"10 Years Later: Cloud Computing is Closing the Performance Gap","authors":"Giulia Guidi, Marquita Ellis, A. Buluç, K. Yelick, D. Culler","doi":"10.1145/3447545.3451183","DOIUrl":"https://doi.org/10.1145/3447545.3451183","url":null,"abstract":"Can cloud computing infrastructures provide HPC-competitive performance for scientific applications broadly? Despite prolific related literature, this question remains open. Answers are crucial for designing future systems and democratizing high-performance computing. We present a multi-level approach to investigate the performance gap between HPC and cloud computing, isolating different variables that contribute to this gap. Our experiments are divided into (i) hardware and system microbenchmarks and (ii) user application proxies. The results show that today's high-end cloud computing can deliver HPC-competitive performance not only for computationally intensive applications, but also for memory- and communication-intensive applications -- at least at modest scales -- thanks to the high-speed memory systems and interconnects and dedicated batch scheduling now available on some cloud platforms.","PeriodicalId":10596,"journal":{"name":"Companion of the 2018 ACM/SPEC International Conference on Performance Engineering","volume":"10 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82026730","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Tutorial on Benchmarking Big Data Analytics Systems
Authors: Todor Ivanov, Rekha Singhal
DOI: https://doi.org/10.1145/3375555.3383121
Published: 2020-04-20
Abstract: The proliferation of big data technology and faster computing systems has led to pervasive AI-based solutions in our lives. There is a need to understand how to benchmark the systems used to build such solutions, which run a complex pipeline of data pre-processing, statistical analysis, machine learning, and deep learning to build prediction models. Solution architects, engineers, and researchers may use open-source technology or proprietary systems, depending on the desired performance requirements. Relevant performance metrics include data pre-processing time, model training time, and model inference time. No single benchmark answers all the questions of solution architects and researchers. This tutorial covers both practical and research questions on relevant big data and analytics benchmarks.
{"title":"Tutorial on Benchmarking Big Data Analytics Systems","authors":"Todor Ivanov, Rekha Singhal","doi":"10.1145/3375555.3383121","DOIUrl":"https://doi.org/10.1145/3375555.3383121","url":null,"abstract":"The proliferation of big data technology and faster computing systems led to pervasions of AI based solutions in our life. There is need to understand how to benchmark systems used to build AI based solutions that have a complex pipeline of pre-processing, statistical analysis, machine learning and deep learning on data to build prediction models. Solution architects, engineers and researchers may use open-source technology or proprietary systems based on desired performance requirements. The performance metrics may be data pre-processing time, model training time and model inference time. We do not see a single benchmark answering all questions of solution architects and researchers. This tutorial covers both practical and research questions on relevant Big Data and Analytics benchmarks.","PeriodicalId":10596,"journal":{"name":"Companion of the 2018 ACM/SPEC International Conference on Performance Engineering","volume":"37 9 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-04-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77897813","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: WOSP-C 2020: Workshop on Challenges and Opportunities in Large-Scale Performance: Welcoming Remarks
Authors: A. Bondi
DOI: https://doi.org/10.1145/3375555.3384939
Published: 2020-04-20
Abstract: It is my great pleasure to welcome you to WOSP-C 2020, the Workshop on Challenges and Opportunities in Large-Scale Performance. Our theme this year is the use of analytics to interpret system performance and resource usage measurements that can now be gathered rapidly and on a large scale. Our four invited speakers hail from industry. All three presentations in the first session, and the last presentation in the second, deal with modeling and measurement to automate decisions about system configuration or the recognition of anomalies, especially for cloud-based systems. The other two papers in the second session address measurement and modeling issues at a granular level. These topics are highly relevant to the issues system architects and other stakeholders face when deploying systems in the cloud, because deployment alone does not guarantee good performance. The ability to gather vast numbers of performance and resource usage measurements now makes an informed choice of target cloud platforms and their configurations possible. The presentations in this workshop address various aspects of how this can be achieved.
{"title":"WOSP-C 2020: Workshop on Challenges and Opportunities in Large-Scale Performance: Welcoming Remarks","authors":"A. Bondi","doi":"10.1145/3375555.3384939","DOIUrl":"https://doi.org/10.1145/3375555.3384939","url":null,"abstract":"It is my great pleasure to welcome you to WOSP-C 2020, the Workshop on Challenges and Opportunities in Large Scale Performance. Our theme this year relates to the use of analytics to interpret system performance and resource usage measurements that can now be gathered rapidly on a large scale. Our four invited speakers hail from industry. All three presentations in the first session and the last presentation in the second session deal with modeling and measurement to automate the making of decisions about system configuration or the recognition of anomalies, especially for cloud-based systems. The other two papers in the second session address measurement and modeling issues at a granular level. These topics are highly relevant to the issues systems architects and other stakeholders face when deploying systems in the cloud, because doing so need not guarantee good performance. The recent emergence of the ability to gather vast numbers of performance and resource usage measurements facilitates the informed choice of target cloud platforms and their configurations. The presentations in this workshop deal with various aspects of how this can be achieved.","PeriodicalId":10596,"journal":{"name":"Companion of the 2018 ACM/SPEC International Conference on Performance Engineering","volume":"79 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-04-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90618047","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Automated Scalability Assessment in DevOps Environments
Authors: Alberto Avritzer
DOI: https://doi.org/10.1145/3375555.3384936
Published: 2020-04-20
Abstract: In this extended abstract, we outline the presentation planned for WOSP-C 2020. Its goal is to give an overview of the challenges and approaches for automated scalability assessment in the context of DevOps and microservices. The focus is on approaches that automatically identify performance problems, because these approaches can leverage performance anti-pattern detection technology [5]. In addition, we envision extending the approach to recommend component refactoring. In previous work [1,2], we designed a methodology and associated tool support for the automated scalability assessment of microservice architectures, automating all the steps required for scalability assessment. The presentation starts with an introduction to dependability, operational profile data, and DevOps. Specifically, we survey the state of the art in continuous performance monitoring [4], where operational profile data is obtained using APM tools. We then give an overview of selected approaches for production and performance testing based on application monitoring (PPTAM), as introduced in [1,2]. The presentation concludes by outlining a vision for automated performance anti-pattern detection [5]. Specifically, we present the approach to automated anti-pattern detection based on load testing results and profiling introduced in [6], and we provide recommendations for future research.
{"title":"Automated Scalability Assessment in DevOps Environments","authors":"Alberto Avritzer","doi":"10.1145/3375555.3384936","DOIUrl":"https://doi.org/10.1145/3375555.3384936","url":null,"abstract":"In this extended abstract, we provide an outline of the presentation planned for WOSP-C 2020. The goal of the presentation is to provide an overview of the challenges and approaches for automated scalability assessment in the context of DevOps and microservices. The focus of this presentation is on approaches that employ automated identification of performance problems because these approaches can leverage performance anti-pattern[5] detection technology. In addition, we envision extending the approach to recommend component refactoring. In our previous work[1,2] we have designed a methodology and associated tool support for the automated scalability assessment of micro-service architectures, which included the automation of all the steps required for scalability assessment. The presentation starts with an introduction to dependability, operational Profile Data, and DevOps. Specifically, we provide an overview of the state of the art in continuous performance monitoring technologies[4] that are used for obtaining operational profile data using APM tools. We then present an overview of selected approaches for production and performance testing based on the application monitoring tool (PPTAM) as introduced in [1,2]. The presentation concludes by outlining a vision for automated performance anti-pattern[5] detection. Specifically, we present the approach introduced for automated anti-pattern detection based on load testing results and profiling introduced in[6] and provide recommendations for future research.","PeriodicalId":10596,"journal":{"name":"Companion of the 2018 ACM/SPEC International Conference on Performance Engineering","volume":"14 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-04-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83353610","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Poster Abstract: Fair and Efficient Dynamic Bandwidth Allocation with OpenFlow
Authors: Maryam Elahi, Joel van Egmond, Mea Wang, C. Williamson, Jean-Francois Amiot
DOI: https://doi.org/10.1145/3375555.3383587
Published: 2020-04-20
Abstract: Large-scale not-for-profit Internet Service Providers (ISPs), such as National Research and Education Networks (NRENs), often have significant amounts of underutilized bandwidth because they provision their network capacity for the rare event that all clients use their purchased bandwidth simultaneously. However, traffic policers are still applied to enforce committed purchase rates and avoid congestion. We present the design and initial evaluation of an SDN/OpenFlow solution that maximizes link utilization through user-defined fair allocation of spare bandwidth while guaranteeing each client its minimum bandwidth.
{"title":"Poster Abstract: Fair and Efficient Dynamic Bandwidth Allocation with OpenFlow","authors":"Maryam Elahi, Joel van Egmond, Mea Wang, C. Williamson, Jean-Francois Amiot","doi":"10.1145/3375555.3383587","DOIUrl":"https://doi.org/10.1145/3375555.3383587","url":null,"abstract":"Large-scale not-for-profit Internet Service Providers (ISPs), such as National Research and Education Networks (NRENs) often have significant amounts of underutilized bandwidth because they provision their network capacity for the rare event that all clients utilize their purchased bandwidth. However, traffic policers are still applied to enforce committed purchase rates and avoid congestion. We present the design and initial evaluation of an SDN/OpenFlow solution that maximizes the network link utilization by user-defined fair allocation of spare bandwidth, while guaranteeing minimum bandwidth for each client.","PeriodicalId":10596,"journal":{"name":"Companion of the 2018 ACM/SPEC International Conference on Performance Engineering","volume":"17 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-04-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84419426","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Migrating from Monolithic to Serverless: A FinTech Case Study
Authors: Alireza Goli, Omid Hajihassani, Hamzeh Khazaei, Omid Ardakanian, Moe Rashidi, T. Dauphinee
DOI: https://doi.org/10.1145/3375555.3384380
Published: 2020-04-20
Abstract: Serverless computing is steadily becoming the implementation paradigm of choice for a variety of applications, from data analytics to web applications, as it addresses the main problems of serverful and monolithic architectures. In particular, it abstracts away resource provisioning and infrastructure management, enabling developers to focus on program logic instead of resource management, which is handled by the cloud provider. In this paper, we take a document processing system used in FinTech as a case study and describe its migration from a monolithic architecture to a serverless architecture. Our evaluation shows that the serverless implementation significantly improves performance at only a marginal increase in cost.
{"title":"Migrating from Monolithic to Serverless: A FinTech Case Study","authors":"Alireza Goli, Omid Hajihassani, Hamzeh Khazaei, Omid Ardakanian, Moe Rashidi, T. Dauphinee","doi":"10.1145/3375555.3384380","DOIUrl":"https://doi.org/10.1145/3375555.3384380","url":null,"abstract":"Serverless computing is steadily becoming the implementation paradigm of choice for a variety of applications, from data analytics to web applications, as it addresses the main problems with serverfull and monolithic architecture. In particular, it abstracts away resource provisioning and infrastructure management, enabling developers to focus on the logic of the program instead of worrying about resource management which will be handled by cloud providers. In this paper, we consider a document processing system used in FinTech as a case study and describe the migration journey from a monolithic architecture to a serverless architecture. Our evaluation results show that the serverless implementation significantly improves performance while resulting in only a marginal increase in cost.","PeriodicalId":10596,"journal":{"name":"Companion of the 2018 ACM/SPEC International Conference on Performance Engineering","volume":"158 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-04-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73495269","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Kubernetes: Towards Deployment of Distributed IoT Applications in Fog Computing
Authors: Paridhika Kayal
DOI: https://doi.org/10.1145/3375555.3383585
Published: 2020-04-20
Abstract: Fog computing has been regarded as an ideal platform for distributed and diverse IoT applications. A fog environment consists of a network of fog nodes, and IoT applications are composed of containerized microservices that communicate with each other. The distribution and optimization of containerized IoT applications in the fog environment is a recent line of research. Our work uses Kubernetes as an orchestrator that instantiates, manages, and terminates containers in multi-host environments, where each host acts as a fog node. This paper demonstrates the industrial feasibility and practicality of deploying and managing containerized IoT applications on real devices (Raspberry Pis and PCs) using commercial software tools (Docker, WeaveNet). The demonstration shows that an application's functionality is not affected by distributing its communicating microservices across different nodes.
{"title":"Kubernetes: Towards Deployment of Distributed IoT Applications in Fog Computing","authors":"Paridhika Kayal","doi":"10.1145/3375555.3383585","DOIUrl":"https://doi.org/10.1145/3375555.3383585","url":null,"abstract":"Fog computing has been regarded as an ideal platform for distributed and diverse IoT applications. Fog environment consists of a network of fog nodes and IoT applications are composed of containerized microservices communicating with each other. Distribution and optimization of containerized IoT applications in the fog environment is a recent line of research. Our work took Kubernetes as an orchestrator that instantiates, manages, and terminates containers in multiple-host environments for IoT applications, where each host acts as a fog node. This paper demonstrates the industrial feasibility and practicality of deploying and managing containerized IoT applications on real devices (raspberry pis and PCs) by utilizing commercial software tools (Docker, WeaveNet). The demonstration will show that the application's functionality is not affected by the distribution of communicating microservices on different nodes.","PeriodicalId":10596,"journal":{"name":"Companion of the 2018 ACM/SPEC International Conference on Performance Engineering","volume":"13 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-04-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75190275","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: JBrainy: Micro-benchmarking Java Collections with Interference
Authors: N. Couderc, Emma Söderberg, Christoph Reichenbach
DOI: https://doi.org/10.1145/3375555.3383760
Published: 2020-04-20
Abstract: Software developers use collection data structures extensively and are often faced with the task of picking which collection to use. Choosing an inappropriate collection can have a major negative impact on runtime performance. However, choosing the right collection can be difficult, since developers are faced with many possibilities that often appear functionally equivalent. One approach to assist developers in this decision is to micro-benchmark data structures in order to provide performance insights. In this paper, we present results from experiments on Java collections (maps, lists, and sets) using our tool JBrainy, which synthesises micro-benchmarks consisting of sequences of random method calls. We compare our results to those of a previous experiment on Java collections that used a micro-benchmarking approach focused on single methods. Our results support the previous results for lists: we found ArrayList to yield the best running time in 90% of our benchmarks. For sets, we found LinkedHashSet to yield the best performance in 78% of the benchmarks. In contrast to the previous results, we found TreeMap and LinkedHashMap to yield better runtime performance than HashMap in 84% of cases.
{"title":"JBrainy: Micro-benchmarking Java Collections with Interference","authors":"N. Couderc, Emma Söderberg, Christoph Reichenbach","doi":"10.1145/3375555.3383760","DOIUrl":"https://doi.org/10.1145/3375555.3383760","url":null,"abstract":"Software developers use collection data structures extensively and are often faced with the task of picking which collection to use. Choosing an inappropriate collection can have major negative impact on runtime performance. However, choosing the right collection can be difficult since developers are faced with many possibilities, which often appear functionally equivalent. One approach to assist developers in this decision-making process is to micro-benchmark data-structures in order to provide performance insights. In this paper, we present results from experiments on Java collections (maps, lists, and sets) using our tool JBrainy, which synthesises micro-benchmarks with sequences of random method calls. We compare our results to the results of a previous experiment on Java collections that uses a micro-benchmarking approach focused on single methods. Our results support previous results for lists, in that we found ArrayList to yield the best running time in 90% of our benchmarks. For sets, we found LinkedHashSet to yield the best performance in 78% of the benchmarks. In contrast to previous results, we found TreeMap and LinkedHashMap to yield better runtime performance than HashMap in 84% of cases.","PeriodicalId":10596,"journal":{"name":"Companion of the 2018 ACM/SPEC International Conference on Performance Engineering","volume":"47 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-04-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91228021","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}