
Proceedings of the 2020 Sixth International Workshop on Serverless Computing: Latest Publications

Evaluation of Network File System as a Shared Data Storage in Serverless Computing
Pub Date : 2020-12-07 DOI: 10.1145/3429880.3430096
Jaeghang Choi, Kyungyong Lee
Fully-managed cloud and Function-as-a-Service (FaaS) offerings allow the wide adoption of serverless computing for various cloud-native applications. Despite the many advantages that serverless computing provides, no direct connection support exists between function run-times, which is a barrier for data-intensive applications. To overcome this limitation, the leading cloud computing vendor Amazon Web Services (AWS) has started to support mounting a network file system (NFS) across different function run-times. This paper quantitatively evaluates the performance of accessing NFS storage from multiple function run-times and compares it with other methods of sharing data among function run-times. Despite the great qualitative benefits of the approach, the limited I/O bandwidth of NFS storage can become a bottleneck, especially as the number of concurrent accesses from function run-times increases.
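As a concrete illustration of the access pattern evaluated here: AWS exposes an EFS-backed NFS share to Lambda as a local mount path. The sketch below is a minimal, assumption-laden example (the mount path /mnt/shared and the use of the request ID as a file name are hypothetical, not the paper's benchmark code):

```python
import json
import os

# Hypothetical mount point; AWS attaches an EFS access point to the
# function configuration and exposes it as a local path like this one.
MOUNT_PATH = "/mnt/shared"

def handler(event, context):
    """Write this invocation's payload to the shared NFS mount and list
    what other concurrently running function instances have written."""
    path = os.path.join(MOUNT_PATH, f"{context.aws_request_id}.json")
    with open(path, "w") as f:
        json.dump(event, f)
    # Files written by other run-times are visible through the same mount,
    # which is the shared-storage property the paper evaluates.
    return {"files": sorted(os.listdir(MOUNT_PATH))}
```

Every concurrent instance funnels its I/O through the same NFS endpoint, which is why aggregate bandwidth, rather than per-instance latency, becomes the bottleneck the authors measure.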
Citations: 4
Towards Federated Learning using FaaS Fabric
Pub Date : 2020-12-07 DOI: 10.1145/3429880.3430100
Mohak Chadha, Anshul Jindal, M. Gerndt
Federated learning (FL) enables resource-constrained edge devices to learn a shared Machine Learning (ML) or Deep Neural Network (DNN) model, while keeping the training data local and providing privacy, security, and economic benefits. However, building a shared model for heterogeneous devices such as resource-constrained edge and cloud makes the efficient management of FL-clients challenging. Furthermore, with the rapid growth in the number of FL-clients, scaling the FL training process is also difficult. In this paper, we propose a possible solution to these challenges: federated learning over a combination of connected Function-as-a-Service platforms, i.e., a FaaS fabric, which offers a seamless way of extending FL to heterogeneous devices. Towards this, we present FedKeeper, a tool for efficiently managing FL over FaaS fabric. We demonstrate the functionality of FedKeeper on three FaaS platforms through an image classification task with a varying number of devices/clients, different stochastic optimizers, and local computations (local epochs).
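FedKeeper's internals are not reproduced here. As a rough sketch of the aggregation step that any FL system coordinated over FaaS must perform, the following implements plain FedAvg weight averaging (the function name, array shapes, and sample counts are assumptions for illustration):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Weighted average of per-client model weights (FedAvg).
    client_weights: one list of np.ndarray layers per client.
    client_sizes: number of local training samples per client."""
    total = sum(client_sizes)
    averaged = []
    for layer in zip(*client_weights):  # iterate layer-wise across clients
        weighted = [w * (n / total) for w, n in zip(layer, client_sizes)]
        averaged.append(np.stack(weighted).sum(axis=0))
    return averaged

# Example: three clients, each contributing one 2x2 layer.
clients = [[np.full((2, 2), v)] for v in (1.0, 2.0, 3.0)]
print(fedavg(clients, [10, 10, 20]))  # -> 2.25 everywhere, tilted to client 3
```

In the FaaS-fabric setting, each client update would arrive as the return value of a function invocation, and this aggregation would itself run as a cloud function.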
Citations: 19
The Serverless Application Analytics Framework: Enabling Design Trade-off Evaluation for Serverless Software
Pub Date : 2020-12-07 DOI: 10.1145/3429880.3430103
R. Cordingly, Hanfei Yu, Varik Hoang, Zohreh Sadeghi, David Foster, David Perez, Rashad Hatchett, W. Lloyd
To help better understand factors that impact performance on Function-as-a-Service (FaaS) platforms, we have developed the Serverless Application Analytics Framework (SAAF). SAAF provides a reusable framework supporting multiple programming languages that developers can integrate into a function's package for deployment to multiple commercial and open source FaaS platforms. SAAF improves the observability of FaaS function deployments by collecting forty-eight distinct metrics, enabling developers to profile CPU and memory utilization, monitor infrastructure state, and observe platform scalability. In this paper, we describe SAAF in detail and introduce supporting tools, highlighting important features and how to use them. Our client application, FaaS Runner, orchestrates workloads and automates the process of conducting experiments across FaaS platforms. We provide a case study demonstrating the integration of SAAF into an existing open source image processing pipeline built for AWS Lambda. Using FaaS Runner, we automate experiments and acquire metrics from SAAF to profile each function of the pipeline and evaluate performance implications. Finally, we summarize contributions using our tools to evaluate the implications of different programming languages for serverless data processing and to build performance models that predict runtime for serverless workloads.
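SAAF's actual API is not shown here. The sketch below merely illustrates the style of in-handler profiling such a framework performs, sampling Linux counters that are readable inside FaaS sandboxes (the decorator and metric names are invented for this example):

```python
import time

def read_cpu_jiffies():
    """Aggregate CPU jiffies from /proc/stat, which is readable inside
    Linux-based FaaS sandboxes such as AWS Lambda's."""
    with open("/proc/stat") as f:
        fields = f.readline().split()[1:]
    return sum(int(x) for x in fields)

def profiled(handler):
    """Decorator that attaches runtime and CPU-usage metrics to the
    response, mimicking the kind of observability SAAF provides."""
    def wrapper(event, context):
        t0, cpu0 = time.time(), read_cpu_jiffies()
        result = handler(event, context)
        return {
            "result": result,
            "metrics": {
                "runtime_ms": round((time.time() - t0) * 1000, 2),
                "cpu_jiffies": read_cpu_jiffies() - cpu0,
            },
        }
    return wrapper

@profiled
def handler(event, context):
    return sum(i * i for i in range(100_000))
```

A framework like SAAF extends this idea to dozens of metrics (container IDs, cold/warm state, memory high-water marks) so that a harness such as FaaS Runner can correlate them across thousands of invocations.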
Citations: 18
An Evaluation of Serverless Data Processing Frameworks
Pub Date : 2020-12-07 DOI: 10.1145/3429880.3430095
Sebastian Werner, Richard Girke, Jörn Kuhlenkamp
Serverless computing is a promising cloud execution model that significantly simplifies cloud users' operational concerns by offering features such as auto-scaling and a pay-as-you-go cost model. Consequently, serverless systems promise an excellent fit for ad-hoc data processing. Unsurprisingly, numerous serverless systems and frameworks for data processing have recently emerged from research and industry. However, systems researchers, decision-makers, and data analysts lack a clear picture of how these serverless systems compare to each other. In this paper, we identify existing serverless frameworks for data processing. We present a qualitative assessment of their different system architectures and an experiment-driven quantitative comparison of performance, cost, and usability using the TPC-H benchmark. Our results show that the three publicly available serverless data processing frameworks outperform a comparably sized Apache Spark cluster in terms of performance and cost for ad-hoc queries on cold data.
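To make the cost trade-off between an always-on cluster and per-invocation FaaS billing concrete, here is a back-of-the-envelope calculation; all prices and workload numbers are hypothetical and are not the paper's measurements:

```python
# Hypothetical prices, for illustration only.
LAMBDA_GB_SECOND = 0.0000166667   # USD per GB-second, Lambda-style billing
CLUSTER_HOURLY = 3.0              # USD per hour for a comparable Spark cluster

def faas_query_cost(workers, mem_gb, seconds):
    """Cost of one ad-hoc query fanned out across FaaS workers."""
    return workers * mem_gb * seconds * LAMBDA_GB_SECOND

# One TPC-H-style query: 64 workers x 2 GB for 30 s.
per_query = faas_query_cost(64, 2.0, 30)
print(f"FaaS: ${per_query:.4f} per query")            # ~$0.0640
# The always-on cluster costs the same whether or not queries arrive:
print(f"Cluster: ${CLUSTER_HOURLY:.2f} per idle hour")
# Queries per hour before the cluster becomes the cheaper option:
print(f"Break-even: {CLUSTER_HOURLY / per_query:.0f} queries/hour")
```

For sporadic queries on cold data the pay-per-use side of this arithmetic dominates, which is consistent with the direction of the paper's findings.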
Citations: 4
Implications of Public Cloud Resource Heterogeneity for Inference Serving
Pub Date : 2020-12-07 DOI: 10.1145/3429880.3430093
J. Gunasekaran, Cyan Subhra Mishra, P. Thinakaran, M. Kandemir, C. Das
We are witnessing an increasing trend towards using Machine Learning (ML) based prediction systems across different application domains, including product recommendation systems, personal assistant devices, facial recognition, and more. These applications typically have diverse requirements in terms of accuracy and response latency that can be satisfied by a myriad of ML models. However, the deployment cost of prediction serving primarily depends on the type of resources being procured, which are themselves heterogeneous in terms of provisioning latencies and billing complexity. Thus, it is difficult for an inference serving system to choose from this confounding array of resource types and model types to provide low-latency and cost-effective inferences. In this work we quantitatively characterize the cost, accuracy, and latency implications of hosting ML inferences on different public cloud resource offerings. Our evaluation shows that prior work does not solve the problem along both dimensions of model and resource heterogeneity. Hence, to holistically address this problem, we need to solve the issues that arise from combining model and resource heterogeneity to optimize for application constraints. Towards this, we discuss the design implications of a self-managed inference serving system that can optimize for application requirements based on public cloud resource characteristics.
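A toy version of the joint selection problem described above: choose the cheapest (model, resource) pair that still satisfies the application's accuracy and latency targets. Every entry in the candidate table is hypothetical:

```python
# (model, resource, accuracy, p99 latency in ms, USD per 1M inferences).
# All numbers are invented for illustration.
candidates = [
    ("resnet18",  "lambda-1GB", 0.89, 45, 1.2),
    ("resnet50",  "lambda-3GB", 0.93, 80, 3.1),
    ("resnet50",  "gpu-vm",     0.93, 12, 6.8),
    ("resnet152", "gpu-vm",     0.95, 25, 9.5),
]

def cheapest(min_acc, max_latency_ms):
    """Cheapest pair meeting both constraints, or None if infeasible."""
    feasible = [c for c in candidates
                if c[2] >= min_acc and c[3] <= max_latency_ms]
    return min(feasible, key=lambda c: c[4]) if feasible else None

print(cheapest(0.92, 100))  # lambda-3GB wins on cost
print(cheapest(0.92, 50))   # tighter latency forces the GPU VM
```

Optimizing over only one axis, models or resources, misses exactly the kind of crossover the second query exhibits, which is the paper's central argument for treating both dimensions together.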
Citations: 9
Resource Management for Cloud Functions with Memory Tracing, Profiling and Autotuning
Pub Date : 2020-12-07 DOI: 10.1145/3429880.3430094
Josef Spillner
Application software provisioning has evolved from monolithic designs towards a variety of abstractions, including serverless applications. The promise of that abstraction is that developers are freed from infrastructural concerns such as instance activation and autoscaling. However, today's FaaS-based serverless architectures still expose developers to explicit low-level decisions about the amount of memory to allocate to each cloud function. In many cases, guesswork and ad-hoc decisions determine the values a developer will put into the configuration. We contribute tools that measure the memory consumption of a function over time in various Docker, OpenFaaS and GCF/GCR configurations and create trace profiles that advanced FaaS engines can use to autotune memory dynamically. Moreover, we explain how pricing forecasts can be performed by connecting these traces with a FaaS characteristics knowledge base.
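The contributed tools are not reproduced here; the sketch below only captures the autotuning idea, picking the smallest offered memory size that covers a recorded usage trace plus headroom, and attaching a Lambda-style GB-second price forecast (trace values, sizes, and rates are assumptions):

```python
# Hypothetical memory trace (MB, sampled over one invocation) and the
# discrete memory sizes a FaaS platform typically offers.
trace_mb = [95, 140, 310, 290, 180, 120]
OFFERED_SIZES_MB = [128, 256, 512, 1024, 2048]

def autotune(trace, headroom=1.2):
    """Smallest offered size covering peak usage plus a safety margin."""
    need = max(trace) * headroom
    return next(size for size in OFFERED_SIZES_MB if size >= need)

size = autotune(trace_mb)
print(f"peak={max(trace_mb)} MB -> allocate {size} MB")   # 310 MB -> 512 MB

# Pricing forecast under Lambda-style GB-second billing (rate hypothetical):
rate_gb_s, duration_s = 0.0000166667, 1.5
cost = (size / 1024) * duration_s * rate_gb_s
print(f"forecast: ${cost:.8f} per invocation")
```

A FaaS engine with access to such trace profiles could rerun this calculation continuously and resize functions as their workload drifts, which is the dynamic autotuning the paper targets.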
Citations: 8
Bringing scaling transparency to Proteomics applications with serverless computing
Pub Date : 2020-12-07 DOI: 10.1145/3429880.3430101
M. Mirabelli, P. López, G. Vernik
Scaling transparency means that applications can expand in scale without changes to the system structure or the application algorithms. Serverless computing's inherent auto-scaling support and fast function launching are ideally suited to supporting scaling transparency in different domains. In particular, Proteomics applications could benefit considerably from scaling transparency and serverless technologies due to their high concurrency requirements; the auto-provisioning nature of serverless platforms makes this computing model an alternative for dynamically satisfying the resources required by protein folding simulation processes. However, the transition to these architectures faces a challenge: it should show performance and cost comparable to code running in Virtual Machines (VMs). In this article, we demonstrate that Proteomics applications implemented with the Replica Exchange algorithm can be moved to serverless settings while guaranteeing scaling transparency. We also validate that we can reduce the total execution time by around forty percent at a cost comparable to cluster technologies (Work Queue) running over VMs.
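The Replica Exchange core that each cloud function would iterate is compact: neighbouring replicas at temperatures T_i and T_j exchange configurations with probability min(1, exp[(1/kT_i - 1/kT_j)(E_i - E_j)]). A minimal sketch with toy energies and a unit Boltzmann constant (not the paper's implementation):

```python
import math
import random

def swap_prob(e_i, e_j, t_i, t_j, k=1.0):
    """Metropolis acceptance probability for exchanging replicas i and j."""
    delta = (1.0 / (k * t_i) - 1.0 / (k * t_j)) * (e_i - e_j)
    return min(1.0, math.exp(delta))

# Toy replicas: (energy, temperature). In the serverless version, each
# replica simulates independently as a cloud function between exchanges.
replicas = [(-120.0, 300.0), (-115.0, 330.0), (-108.0, 360.0)]
for i in range(len(replicas) - 1):
    (e_i, t_i), (e_j, t_j) = replicas[i], replicas[i + 1]
    p = swap_prob(e_i, e_j, t_i, t_j)
    if random.random() < p:
        # Swap configurations (energies) across the temperature ladder.
        replicas[i], replicas[i + 1] = (e_j, t_i), (e_i, t_j)
    print(f"pair ({i},{i + 1}): acceptance probability {p:.3f}")
```

Because replicas only synchronize at these brief exchange points, the algorithm maps naturally onto many short-lived, auto-scaled function invocations, which is what makes the serverless port plausible.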
Citations: 3
Temporal Performance Modelling of Serverless Computing Platforms
Pub Date : 2020-12-07 DOI: 10.1145/3429880.3430092
Nima Mahmoudi, Hamzeh Khazaei
Analytical performance models have proven very effective in analyzing, predicting, and improving the performance of distributed computing systems. However, rigorous analytical models for the transient behaviour of serverless computing platforms are lacking, even though serverless computing is expected to become the dominant paradigm in cloud computing. Moreover, due to the unique characteristics and policies of these platforms, performance models developed for other systems cannot be applied to them directly. In this work, we propose an analytical performance model that is capable of predicting several key performance metrics for serverless workloads using only their average response times for warm and cold requests. The model uses realistic assumptions, which makes it suitable for online analysis of real-world platforms. We validate the proposed model through extensive experimentation on AWS Lambda. Although we focus primarily on AWS Lambda due to its wide adoption, the proposed model can be leveraged for other public serverless computing platforms with similar auto-scaling policies, e.g., Google Cloud Functions, IBM Cloud Functions, and Azure Functions.
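To give a flavour of the quantities such a model predicts, consider a deliberately oversimplified single-instance case: under Poisson arrivals at rate λ, an arrival finds no warm instance exactly when the gap since the previous request exceeds the keep-alive window T, so P(cold) = e^(-λT). The sketch below combines this with warm and cold response times (all parameters hypothetical; the paper's model is far more general):

```python
import math

def p_cold(arrival_rate, keep_alive_s):
    """P(cold start) for a single function instance under Poisson arrivals:
    an arrival is cold iff the inter-arrival gap exceeded the keep-alive
    window, i.e. P(gap > T) = exp(-lambda * T)."""
    return math.exp(-arrival_rate * keep_alive_s)

def mean_response_ms(arrival_rate, keep_alive_s, warm_ms, cold_ms):
    pc = p_cold(arrival_rate, keep_alive_s)
    return pc * cold_ms + (1 - pc) * warm_ms

# 0.01 req/s against a 10-minute keep-alive window:
print(f"P(cold)  = {p_cold(0.01, 600):.4f}")                    # ~0.0025
print(f"E[resp]  = {mean_response_ms(0.01, 600, 80, 1200):.1f} ms")
```

Real platforms run many instances with scale-out and expiration policies interacting, which is why the paper builds a full transient model rather than a closed-form expression like this one.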
Citations: 11
ACE
Pub Date : 2020-12-07 DOI: 10.1145/3429880.3430098
Anthony Byrne, S. Nadgowda, A. Coskun
While much of the software running on today's serverless platforms is written in easily analyzed high-level interpreted languages, many performance-conscious users choose to deploy their applications as container-encapsulated compiled binaries on serverless container platforms such as AWS Fargate or Google Cloud Run. Modern CI/CD workflows make this deployment process nearly instantaneous, leaving little time for in-depth manual application security reviews. This combination of opaque binaries and rapid deployment prevents cloud developers and platform operators from knowing whether their applications contain outdated, vulnerable, or legally-compromised code. This paper proposes Approximate Concrete Execution (ACE), a just-in-time binary analysis technique that enables automatic software component discovery for serverless binaries. Through classification and search-engine experiments with common cloud software packages, we find that ACE scans binaries 5.2x faster than a state-of-the-art binary analysis tool, minimizing the impact on deployment and cold-start latency while maintaining comparable recall.
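ACE's approximate-execution machinery is not shown here. The sketch below conveys only the discovery goal, matching printable strings extracted from a binary against a small table of known component signatures (the table entries are hypothetical):

```python
import re
import sys

# Hypothetical signature table: substrings that betray a bundled component.
SIGNATURES = {
    b"OpenSSL 1.0.2": ("openssl", "1.0.2", "outdated"),
    b"zlib 1.2.11":   ("zlib", "1.2.11", "ok"),
    b"GPL-3.0":       ("unknown", "-", "needs license review"),
}

def strings(blob, min_len=6):
    """Printable ASCII runs, like the Unix strings(1) tool."""
    return re.findall(rb"[ -~]{%d,}" % min_len, blob)

def discover(path):
    """Report which known components appear in the binary at `path`."""
    blob = open(path, "rb").read()
    hits = set()
    for s in strings(blob):
        for sig, info in SIGNATURES.items():
            if sig in s:
                hits.add(info)
    return hits

if __name__ == "__main__":
    for name, version, status in sorted(discover(sys.argv[1])):
        print(f"{name} {version}: {status}")
```

Naive string matching misses stripped or statically inlined components; ACE's contribution is recovering such evidence quickly enough to run in the deployment path, where a slow scanner would inflate cold-start latency.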
{"title":"ACE","authors":"Anthony Byrne, S. Nadgowda, A. Coskun","doi":"10.1145/3429880.3430098","DOIUrl":"https://doi.org/10.1145/3429880.3430098","url":null,"abstract":"While much of the software running on today's serverless platforms is written in easily-analyzed high-level interpreted languages, many performance-conscious users choose to deploy their applications as container-encapsulated compiled binaries on serverless container platforms such as AWS Fargate or Google Cloud Run. Modern CI/CD workflows make this deployment process nearly-instantaneous, leaving little time for in-depth manual application security reviews. This combination of opaque binaries and rapid deployment prevents cloud developers and platform operators from knowing if their applications contain outdated, vulnerable, or legally-compromised code. This paper proposes Approximate Concrete Execution (ACE), a just-in-time binary analysis technique that enables automatic software component discovery for serverless binaries. Through classification and search engine experiments with common cloud software packages, we find that ACE scans binaries 5.2x faster than a state-of-the-art binary analysis tool, minimizing the impact on deployment and cold-start latency while maintaining comparable recall.","PeriodicalId":224350,"journal":{"name":"Proceedings of the 2020 Sixth International Workshop on Serverless Computing","volume":"161 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114814231","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Active-Standby for High-Availability in FaaS
Pub Date : 2020-12-07 DOI: 10.1145/3429880.3430097
Yasmina Bouizem, N. Parlavantzas, Djawida Dib, C. Morin
Serverless computing is becoming increasingly attractive to cloud solution architects and developers. This new computing paradigm relies on Function-as-a-Service (FaaS) platforms that enable deploying functions without concern for the underlying infrastructure. An important challenge in designing FaaS platforms is ensuring the availability of deployed functions. Existing FaaS platforms address this challenge principally by retrying function executions. In this paper, we propose and implement an alternative fault-tolerance approach based on active-standby failover. Results from an experimental evaluation show that our approach increases availability and performance compared to the retry-based approach.
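The paper's implementation is not reproduced here; the sketch below gives a client-side illustration of the failover idea, invoking an active endpoint and switching to a standby on error or timeout instead of retrying the same, possibly dead, instance (the URLs and timeout are placeholders):

```python
import urllib.request

# Placeholder endpoints for the active and standby deployments of a function.
ACTIVE = "https://faas.example.com/fn/primary"
STANDBY = "https://faas.example.com/fn/standby"

def invoke(url, payload, timeout_s=2.0):
    """POST the payload to one function endpoint and return its response."""
    req = urllib.request.Request(url, data=payload, method="POST")
    with urllib.request.urlopen(req, timeout=timeout_s) as resp:
        return resp.read()

def invoke_with_failover(payload):
    """Try the active replica; on any network error or timeout, fail over
    to the standby rather than retrying the failed instance."""
    try:
        return invoke(ACTIVE, payload)
    except OSError:  # covers URLError, connection errors, socket timeouts
        return invoke(STANDBY, payload)
```

Done inside the platform, as the paper proposes, the standby is kept warm and the switch happens without the client paying a full extra timeout, which is where the availability and performance gains over plain retries come from.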
Citations: 7