{"title":"在无服务器计算中揭开函数内并行的神秘面纱","authors":"M. Kiener, Mohak Chadha, M. Gerndt","doi":"10.1145/3493651.3493672","DOIUrl":null,"url":null,"abstract":"Serverless computing offers a pay-per-use model with high elasticity and automatic scaling for a wide range of applications. Since cloud providers abstract most of the underlying infrastructure, these services work similarly to black-boxes. As a result, users can influence the resources allocated to their functions, but might not be aware that they have to parallelize them to profit from the additionally allocated virtual CPUs (vCPUs). In this paper, we analyze the impact of parallelization within a single function and container instance for AWS Lambda, Google Cloud Functions (GCF), and Google Cloud Run (GCR). We focus on compute-intensive workloads since they benefit greatly from parallelization. Furthermore, we investigate the correlation between the number of allocated CPU cores and vCPUs in serverless environments. Our results show that the number of available cores to a function/container instance does not always equal the number of allocated vCPUs. By parallelizing serverless workloads, we observed cost savings up to 81% for AWS Lambda, 49% for GCF, and 69.8% for GCR.","PeriodicalId":270470,"journal":{"name":"Proceedings of the Seventh International Workshop on Serverless Computing (WoSC7) 2021","volume":"8 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"11","resultStr":"{\"title\":\"Towards Demystifying Intra-Function Parallelism in Serverless Computing\",\"authors\":\"M. Kiener, Mohak Chadha, M. Gerndt\",\"doi\":\"10.1145/3493651.3493672\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Serverless computing offers a pay-per-use model with high elasticity and automatic scaling for a wide range of applications. Since cloud providers abstract most of the underlying infrastructure, these services work similarly to black-boxes. As a result, users can influence the resources allocated to their functions, but might not be aware that they have to parallelize them to profit from the additionally allocated virtual CPUs (vCPUs). In this paper, we analyze the impact of parallelization within a single function and container instance for AWS Lambda, Google Cloud Functions (GCF), and Google Cloud Run (GCR). We focus on compute-intensive workloads since they benefit greatly from parallelization. Furthermore, we investigate the correlation between the number of allocated CPU cores and vCPUs in serverless environments. Our results show that the number of available cores to a function/container instance does not always equal the number of allocated vCPUs. 
By parallelizing serverless workloads, we observed cost savings up to 81% for AWS Lambda, 49% for GCF, and 69.8% for GCR.\",\"PeriodicalId\":270470,\"journal\":{\"name\":\"Proceedings of the Seventh International Workshop on Serverless Computing (WoSC7) 2021\",\"volume\":\"8 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-10-22\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"11\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the Seventh International Workshop on Serverless Computing (WoSC7) 2021\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3493651.3493672\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the Seventh International Workshop on Serverless Computing (WoSC7) 2021","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3493651.3493672","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 11
Abstract
Serverless computing offers a pay-per-use model with high elasticity and automatic scaling for a wide range of applications. Since cloud providers abstract away most of the underlying infrastructure, these services behave like black boxes. As a result, users can influence the resources allocated to their functions but might not be aware that they have to parallelize their code to profit from the additionally allocated virtual CPUs (vCPUs). In this paper, we analyze the impact of parallelization within a single function and container instance for AWS Lambda, Google Cloud Functions (GCF), and Google Cloud Run (GCR). We focus on compute-intensive workloads, since they benefit greatly from parallelization. Furthermore, we investigate the correlation between the number of allocated CPU cores and vCPUs in serverless environments. Our results show that the number of cores available to a function/container instance does not always equal the number of allocated vCPUs. By parallelizing serverless workloads, we observed cost savings of up to 81% for AWS Lambda, 49% for GCF, and 69.8% for GCR.
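To make the idea of intra-function parallelism concrete, the following is a minimal Python sketch (not taken from the paper) of a hypothetical AWS Lambda handler that fans a compute-intensive task out across worker processes within a single function instance. The handler name, the event field "limit", and the prime-counting workload are illustrative assumptions; Process and Pipe are used instead of multiprocessing.Pool because Pool and Queue depend on /dev/shm, which the Lambda execution environment does not provide.

# Illustrative sketch: intra-function parallelism inside one Lambda instance.
import os
from multiprocessing import Process, Pipe


def count_primes(lo, hi, conn):
    # Naive prime count over [lo, hi) -- a stand-in compute-intensive task.
    total = 0
    for n in range(max(lo, 2), hi):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            total += 1
    conn.send(total)
    conn.close()


def handler(event, context):
    # os.cpu_count() reports the cores visible to the instance, which may not
    # match the vCPU share that is actually allocated to it.
    workers = os.cpu_count() or 1
    limit = int(event.get("limit", 200_000))  # assumed input field

    chunk = limit // workers
    procs, pipes = [], []
    for i in range(workers):
        parent, child = Pipe(duplex=False)  # parent receives, child sends
        lo = i * chunk
        hi = limit if i == workers - 1 else (i + 1) * chunk
        p = Process(target=count_primes, args=(lo, hi, child))
        p.start()
        procs.append(p)
        pipes.append(parent)

    results = [conn.recv() for conn in pipes]
    for p in procs:
        p.join()
    return {"primes_below_limit": sum(results), "workers": workers}

Because providers bill by allocated memory and execution duration, a workload parallelized this way can finish faster at the same memory setting, which is the mechanism behind the cost savings reported above; the visible core count used here may over- or under-state the effective parallelism, which is precisely the core-versus-vCPU mismatch the paper investigates.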