
Proceedings of the Seventh International Workshop on Serverless Computing (WoSC7) 2021: Latest Publications

Is Function-as-a-Service a Good Fit for Latency-Critical Services?
Haoran Qiu, Saurabh Jha, Subho Sankar Banerjee, Archit Patke, Chen Wang, H. Franke, Z. Kalbarczyk, R. Iyer
Function-as-a-Service (FaaS) is becoming an increasingly popular cloud-deployment paradigm for serverless computing that frees application developers from managing the infrastructure. At the same time, it allows cloud providers to assert control in workload consolidation, i.e., co-locating multiple containers on the same server, thereby achieving higher server utilization, often at the cost of higher end-to-end function request latency. Interestingly, a key aspect of serverless latency management has not been well studied: the trade-off between application developers' latency goals and the FaaS providers' utilization goals. This paper presents a multi-faceted, measurement-driven study of latency variation in serverless platforms that elucidates this trade-off space. We obtained production measurements by executing FaaS benchmarks on IBM Cloud and a private cloud to study the impact of workload consolidation, queuing delay, and cold starts on the end-to-end function request latency. We draw several conclusions from the characterization results. For example, increasing a container's allocated memory limit from 128 MB to 256 MB reduces the tail latency by 2× but has 1.75× higher power consumption and 59% lower CPU utilization.
Citations: 5
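The tail-latency comparison in the abstract above comes down to computing a high percentile over many end-to-end request samples. A minimal sketch of that measurement step (the lognormal parameters are invented for illustration and are not taken from the paper):

```python
import random

def p99(latencies):
    """Return the 99th-percentile (tail) latency of a sample."""
    ordered = sorted(latencies)
    index = int(0.99 * (len(ordered) - 1))
    return ordered[index]

# Simulate end-to-end request latencies (ms) under two memory limits;
# a larger limit is modeled here as a tighter latency distribution.
random.seed(42)
lat_128mb = [random.lognormvariate(3.0, 0.8) for _ in range(10_000)]
lat_256mb = [random.lognormvariate(3.0, 0.5) for _ in range(10_000)]

print(f"p99 @128MB: {p99(lat_128mb):.1f} ms")
print(f"p99 @256MB: {p99(lat_256mb):.1f} ms")
print(f"tail-latency reduction: {p99(lat_128mb) / p99(lat_256mb):.2f}x")
```

The paper's actual measurements come from FaaS benchmarks on IBM Cloud and a private cloud; the sketch only shows how a "2× tail-latency reduction" claim is computed from raw samples.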
SLA for Sequential Serverless Chains: A Machine Learning Approach
Mohamed Elsakhawy, M. Bauer
Despite its vast potential, a challenge facing serverless computing's wide-scale adoption is the lack of Service Level Agreements (SLAs) for serverless platforms. This challenge is compounded when composition technologies are employed to construct large applications using chains of functions. Due to the dependency of a chain's performance on each function forming it, a single function's sub-optimal performance can result in performance degradations of the entire chain. This paper sheds light on this problem and provides a categorical classification of the factors that impact a serverless function execution performance. We discuss the challenge of serverless chains' SLA and present the results of leveraging FaaS2F, our proposed serverless SLA framework, to define SLAs for fixed-size and variable-size sequential serverless chains. The validation results demonstrate high accuracy in detecting sub-optimal executions exceeding 79%.
Citations: 0
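FaaS2F's internals are not shown here, but the core dependency the abstract describes, that a sequential chain's SLA compliance hinges on every member function, can be sketched as a simple end-to-end budget check (function names and numbers are illustrative):

```python
def chain_within_sla(function_latencies_ms, sla_ms):
    """A sequential chain meets its SLA only if the sum of per-function
    execution times stays within the chain-level budget; one slow
    function degrades the whole chain."""
    total = sum(function_latencies_ms)
    return total <= sla_ms, total

# Three-function chain with a 500 ms end-to-end SLA (invented figures).
ok, total = chain_within_sla([120.0, 210.0, 95.0], sla_ms=500.0)
print(ok, total)  # True 425.0
```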
Implications of Alternative Serverless Application Control Flow Methods
Sterling D. Quinn, R. Cordingly, W. Lloyd
Function-as-a-Service or FaaS is a popular delivery model of serverless computing where developers upload code to be executed in the cloud as short running stateless functions. Using smaller functions to decompose processing of larger tasks or workflows introduces the question of how to instrument application control flow to orchestrate an overall task or workflow. In this paper, we examine implications of using different methods to orchestrate the control flow of a serverless data processing pipeline composed as a set of independent FaaS functions. We performed experiments on the AWS Lambda FaaS platform and compared how four different patterns of control flow impact the cost and performance of the pipeline. We investigate control flow using client orchestration, microservice controllers, event-based triggers, and state-machines. Overall, we found that asynchronous methods led to lower orchestration costs, and that event-based orchestration incurred a performance penalty.
Citations: 7
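Two of the four orchestration patterns the paper compares can be simulated with plain callables standing in for deployed FaaS functions (the pipeline stages are invented for illustration):

```python
def resize(x):  return x + ":resized"
def filter_(x): return x + ":filtered"
def store(x):   return x + ":stored"

# 1) Client orchestration: the caller invokes each stage synchronously.
def client_orchestration(payload):
    return store(filter_(resize(payload)))

# 2) State machine: a declarative transition table drives the pipeline,
#    the way a Step Functions definition chains Lambda states.
STATES = {
    "Resize": (resize, "Filter"),
    "Filter": (filter_, "Store"),
    "Store":  (store, None),
}

def state_machine(payload, start="Resize"):
    state = start
    while state is not None:
        fn, state = STATES[state]
        payload = fn(payload)
    return payload

print(client_orchestration("img"))  # img:resized:filtered:stored
print(state_machine("img"))         # same result, different control flow
```

Both yield the same output; the paper's point is that they differ in where orchestration cost and latency land, which the sketch does not model.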
Beyond @CloudFunction: Powerful Code Annotations to Capture Serverless Runtime Patterns
Raffael Klingler, N. Trifunovic, Josef Spillner
Simplicity in elastically scalable application development is a key concern addressed by the serverless computing paradigm, in particular the code-level Function-as-a-Service (FaaS). Various FaaSification frameworks demonstrated that marking code methods to streamline their offloading as cloud functions offers a simple bridge to software engineering habits. As application complexity increases, more complex runtime patterns with background activities, such as keeping containerised cloud functions warm to ensure the absence of cold starts, usually require giving up on simplicity and instead investing efforts into orchestrating infrastructure. By bringing infrastructure-as-code concepts into the function source via powerful code annotations, typical orchestration patterns can be simplified again. We evaluate this idea and demonstrate its practical feasibility with FaaS Fusion, an annotations library and transpiler framework for JavaScript.
Citations: 3
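FaaS Fusion itself targets JavaScript, but the annotation idea translates naturally to a Python decorator: deployment metadata (here, a hypothetical keep-warm flag) lives next to the function instead of in separate infrastructure templates. Everything below is an illustrative stand-in, not FaaS Fusion's API:

```python
import functools

REGISTRY = {}

def cloud_function(keep_warm=False):
    """Sketch of a @CloudFunction-style annotation: record runtime-pattern
    metadata alongside the code so a transpiler can later generate the
    corresponding infrastructure."""
    def decorate(fn):
        REGISTRY[fn.__name__] = {"keep_warm": keep_warm}
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            return fn(*args, **kwargs)
        return wrapper
    return decorate

@cloud_function(keep_warm=True)
def handler(event):
    return {"ok": True, "event": event}

print(handler({"id": 1}))
print(REGISTRY["handler"])  # {'keep_warm': True}
```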
BIAS Autoscaler: Leveraging Burstable Instances for Cost-Effective Autoscaling on Cloud Systems
Jaime Dantas, Hamzeh Khazaei, Marin Litoiu
Burstable instances have recently been introduced by cloud providers as a cost-efficient alternative to customers that do not require powerful machines for running their workloads. Unlike conventional instances, the CPU capacity of burstable instances is rate limited, but they can be boosted to their full capacity for small periods when needed. Currently, the majority of cloud providers offer this option as a cheaper solution for their clients. However, little research has been done on the practical usage of these CPU-limited instances. In this paper, we present a novel autoscaling solution that uses burstable instances along with regular instances to handle the queueing arising in traffic and flash crowds. We design BIAS Autoscaler, a state-of-the-art framework that leverages burstable and regular instances for cost-efficient autoscaling and evaluate it on the Google Cloud Platform. We apply our framework to a real-world microservice workload, and conduct extensive experimental evaluations using Google Compute Engines. Experimental results show that BIAS Autoscaler can reduce the overall cost up to 25% and increase resource efficiency by 42% while maintaining the same service quality observed when using conventional instances only.
Citations: 2
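The economics behind mixing burstable and regular instances can be sketched with a toy fleet-cost function. The hourly rates below are placeholders loosely in the shape of on-demand pricing, not the figures the paper uses:

```python
def fleet_cost(regular, burstable, hours,
               regular_rate=0.0416, burstable_rate=0.0104):
    """Hourly cost of a mixed fleet: burstable instances are cheaper
    but rate-limited, so an autoscaler can swap some regular capacity
    for bursts during transient load."""
    return (regular * regular_rate + burstable * burstable_rate) * hours

baseline = fleet_cost(regular=8, burstable=0, hours=24)
mixed    = fleet_cost(regular=4, burstable=8, hours=24)
print(f"baseline ${baseline:.2f}, mixed ${mixed:.2f}, "
      f"saving {100 * (1 - mixed / baseline):.0f}%")
```

BIAS Autoscaler's actual decision logic also tracks CPU-credit budgets and service quality, which this cost identity ignores.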
SFL: A Compiler for Generating Stateful AWS Lambda Serverless Applications
Lukas Brand, Markus U. Mock
Over the past couple of years, serverless computing has become a popular way of structuring and deploying applications in the cloud. However, several practical and research challenges remain. In this paper, we provide the first step to address two open issues. We developed a simple extension language (SFL) and a compiler to enable software developers to write entire serverless applications as one piece. The compiler generates necessary orchestration code that automatically binds several functions together. In addition, the SFL tools allow programmers to write stateful serverless functions with the compiler generating supporting cloud infrastructure for the storage and access of the application state. We evaluate our system using simple benchmark programs, comparing the resulting performance to Azure durable functions, which directly supports statefulness. The execution times we see in our unoptimized code are only slightly worse than what we measure on the Azure platform. Overall execution times are considerably better due to better scheduling by AWS Lambda than the Azure durable functions.
Citations: 3
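The orchestration code the SFL compiler generates boils down to binding several functions into one sequential application. As an in-process sketch only (SFL emits real cloud wiring, not a closure):

```python
def compose(*stages):
    """Bind a sequence of functions into a single pipeline, applying
    each stage to the previous stage's output."""
    def pipeline(payload):
        for stage in stages:
            payload = stage(payload)
        return payload
    return pipeline

double = lambda x: x * 2
incr   = lambda x: x + 1

app = compose(double, incr)
print(app(10))  # 21
```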
Towards Demystifying Intra-Function Parallelism in Serverless Computing
M. Kiener, Mohak Chadha, M. Gerndt
Serverless computing offers a pay-per-use model with high elasticity and automatic scaling for a wide range of applications. Since cloud providers abstract most of the underlying infrastructure, these services work similarly to black-boxes. As a result, users can influence the resources allocated to their functions, but might not be aware that they have to parallelize them to profit from the additionally allocated virtual CPUs (vCPUs). In this paper, we analyze the impact of parallelization within a single function and container instance for AWS Lambda, Google Cloud Functions (GCF), and Google Cloud Run (GCR). We focus on compute-intensive workloads since they benefit greatly from parallelization. Furthermore, we investigate the correlation between the number of allocated CPU cores and vCPUs in serverless environments. Our results show that the number of available cores to a function/container instance does not always equal the number of allocated vCPUs. By parallelizing serverless workloads, we observed cost savings up to 81% for AWS Lambda, 49% for GCF, and 69.8% for GCR.
Citations: 11
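Intra-function parallelism as described above means splitting a compute-intensive workload across the cores actually visible inside one function instance. A minimal process-pool sketch (the kernel and sizes are invented; the paper's point is that `os.cpu_count()` inside the instance may not match the advertised vCPU allocation):

```python
import os
from concurrent.futures import ProcessPoolExecutor

def busy(chunk):
    """Stand-in for a compute-intensive kernel."""
    return sum(i * i for i in chunk)

def run_parallel(n, workers=None):
    """Split work over the cores visible to this function instance
    and combine the partial results."""
    workers = workers or os.cpu_count() or 1
    chunks = [range(k, n, workers) for k in range(workers)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(busy, chunks))

if __name__ == "__main__":
    # Same answer as the sequential sum, computed across all cores.
    print(run_parallel(100_000) == sum(i * i for i in range(100_000)))
```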