Characterizing the Scale-Up Performance of Microservices using TeaStore
Sriyash Caculo, K. Lahiri, Subramaniam Kalambur
2020 IEEE International Symposium on Workload Characterization (IISWC), October 2020
DOI: 10.1109/IISWC50251.2020.00014
Cloud-based applications architected using microservices are becoming increasingly common. While recent work has studied how to optimize the performance of these applications at the data-center level, comparatively little is known about how these services utilize end-server compute resources. Major advances have been made in recent years in the compute density offered by cloud servers, thanks to the emergence of mainstream, high-core-count CPU designs. Consequently, it has become equally important to understand the ability of microservices to "scale up" within a server and make effective use of available resources. This paper presents a study of a publicly available microservice-based application on a state-of-the-art x86 server supporting 128 logical CPUs per socket. We highlight the significant performance opportunities that exist when the scaling properties of individual services and knowledge of the underlying processor topology are properly exploited. Using such techniques, we demonstrate a throughput uplift of 22% and a latency reduction of 18% over a performance-tuned baseline of our microservices workload. In addition, we describe how such microservice-based applications are distinct from workloads commonly used for designing general-purpose server processors.
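The abstract's central idea, giving each service a CPU allocation that matches its scaling behavior while keeping it local to a cache/topology domain, can be illustrated with a small sketch. This is not the paper's actual method; the CCX size, service names, and weights below are hypothetical assumptions chosen only to show the shape of a topology-aware placement plan:

```python
# Hypothetical sketch: partition a 128-logical-CPU socket into core
# complexes (CCXs) and hand each microservice whole CCXs in proportion
# to how well it scales, so a service never straddles an L3 domain.

CPUS_PER_CCX = 8  # assumption: 8 logical CPUs share one L3 slice


def ccx_groups(total_cpus=128, per_ccx=CPUS_PER_CCX):
    """Group logical CPU ids by the L3 domain (CCX) they belong to."""
    return [list(range(i, i + per_ccx)) for i in range(0, total_cpus, per_ccx)]


def assign(services, groups):
    """Greedily allocate whole CCXs to services by scaling weight.

    `services` is a list of (name, weight) pairs; a higher weight means
    the service benefits more from extra cores.
    """
    total_weight = sum(w for _, w in services)
    plan, next_ccx = {}, 0
    for name, weight in services:
        n = max(1, round(weight / total_weight * len(groups)))
        n = min(n, len(groups) - next_ccx)  # don't run past the socket
        plan[name] = [c for g in groups[next_ccx:next_ccx + n] for c in g]
        next_ccx += n
    return plan
```

On Linux, the resulting CPU lists could then be applied with `os.sched_setaffinity`, `taskset`, or a container runtime's cpuset option; the point of the sketch is only that the allocation is derived from per-service scaling behavior plus the cache topology, rather than letting the scheduler spread threads arbitrarily.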