
Proceedings of the 14th IEEE/ACM International Conference on Utility and Cloud Computing: Latest Publications

DMFE
Victor Millnert, Magnus Templing, Patrik Åberg
In this paper we present DMFE (did my function execute?), a concept for learning and recognizing function-level events, states, and loads from low-level execution data. DMFE functions are not necessarily software functions, as in "my_fun()", but functions in the etymological sense of the word, such as "someone pushed code to git" or "player activity is high". This allows DMFE to act as a general multi-purpose sensor that can be applied across a variety of software components (for software monitoring, debugging, or testing) without requiring a deep understanding of the source code. Since the truth is always in the code, the main idea behind DMFE is to have the code itself "paint" execution data on a "canvas" at run-time, and then let a deep neural network detect the patterns it associates with these functions and behaviors. We have successfully applied DMFE to internal production code, and to illustrate how this is done we have also applied it to two open-source projects: i) the distributed version-control system Git and ii) a text-based multi-user dungeon game, Mud.
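To make the "paint on a canvas" idea concrete, below is a minimal sketch, not the authors' implementation: low-level event identifiers are hashed onto a small 2D canvas that fades over time, and a nearest-centroid matcher stands in for the paper's deep neural network. The canvas size, decay factor, and event names (e.g., "recv_pack") are assumptions for illustration only.

```python
# Hypothetical sketch of the DMFE "canvas" idea. Low-level execution events are
# hashed onto a fixed-size 2D canvas at run-time; a classifier then recognizes
# functional-level events from the resulting pattern.
import hashlib
import numpy as np

CANVAS_SIZE = 32  # assumed canvas resolution

def paint(events, decay=0.95):
    """Paint a stream of low-level event identifiers onto a canvas."""
    canvas = np.zeros((CANVAS_SIZE, CANVAS_SIZE))
    for event in events:
        h = hashlib.md5(event.encode()).digest()
        row, col = h[0] % CANVAS_SIZE, h[1] % CANVAS_SIZE
        canvas *= decay            # older events fade out
        canvas[row, col] += 1.0    # each new event brightens its cell
    return canvas / max(canvas.max(), 1.0)

# A deep network would normally consume the canvas; a centroid matcher stands
# in here so the sketch stays self-contained.
def train_centroids(labelled_canvases):
    """labelled_canvases: dict mapping label -> list of canvases."""
    return {lbl: np.mean(cs, axis=0) for lbl, cs in labelled_canvases.items()}

def classify(canvas, centroids):
    return min(centroids, key=lambda lbl: np.linalg.norm(canvas - centroids[lbl]))

# Example: did someone push code to git? (event names are invented)
push = paint(["recv_pack", "update_ref", "write_object"] * 5)
idle = paint(["keepalive"] * 5)
model = train_centroids({"git_push": [push], "idle": [idle]})
print(classify(paint(["recv_pack", "write_object"]), model))  # -> "git_push"
```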
{"title":"DMFE","authors":"Victor Millnert, Magnus Templing, Patrik Åberg","doi":"10.1145/3468737.3494086","DOIUrl":"https://doi.org/10.1145/3468737.3494086","url":null,"abstract":"In this paper we present DMFE (did my function execute?), which is a concept capable of learning and recognizing functional-level events, states, and loads from low-level execution-data. DMFE-functions are not necessarily software functions, as in \"my_fun( )\", but general functions in the etymological sense of the word, such as \"someone pushed code to git\", or \"player activity is high\". This allows DMFE to act as a general multi-purpose sensor which can be applied across a variety of software components-to be used for software monitoring, debugging, or testing-all without requiring the need for a deep understanding of the source code. Since the truth is always in the code, the main idea behind DMFE is to have the code itself \"paint\" execution-data on a \"canvas\" during run-time, and then let a deep neural network detect patterns which it associates with these functions and behaviors. We have successfully applied DMFE on internal production-code, and to illustrate how this is done we have also applied it on the two open-source projects: i) the distributed version-control system Git and ii) a text-based multi-user dungeon game Mud.","PeriodicalId":254382,"journal":{"name":"Proceedings of the 14th IEEE/ACM International Conference on Utility and Cloud Computing","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128052621","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
QoS-aware 5G component selection for content delivery in multi-access edge computing
E. Maleki, Weibin Ma, Lena Mashayekhy, Humberto J. La Roche
The demand for content such as multimedia services with stringent latency requirements has proliferated significantly, causing heavy backhaul congestion in mobile networks. The integration of Multi-access Edge Computing (MEC) and 5G networks is an emerging solution that alleviates backhaul congestion to meet QoS requirements such as ultra-low latency, ultra-high reliability, and continuous connectivity for various latency-critical applications on user equipment (UE). Content caching can markedly augment QoS for UEs by increasing the availability of popular content. However, uncertainty originating from user mobility is the most challenging barrier to deciding content routes for UEs that lead to minimum latency. Among the 5G-enabled MEC components, it is critical to select the optimal 5G components, representing content routes from Edge Application Servers (EASs) to UEs, so as to enhance QoS for UEs with uncertain mobility patterns while reducing frequent handover (path reallocation). To this end, we study component selection for QoS-aware content delivery in 5G-enabled MEC. We first formulate an integer programming (IP) optimization model to obtain optimal content routing decisions. As this problem is NP-hard, we tackle its intractability by designing an efficient online learning approach, called Q-CSCD, that achieves bounded performance. Q-CSCD learns the optimal component selection for UEs and autonomously makes decisions to minimize content delivery latency. We conduct extensive experiments based on a real-world dataset to validate the effectiveness of our proposed algorithm. The results reveal that Q-CSCD achieves low latency and a low handover ratio in reasonable time, with regret decreasing over time.
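The abstract leaves Q-CSCD's internals unspecified, so the sketch below illustrates the general online-learning pattern with a simple epsilon-greedy learner over candidate components, treating observed delivery latency as the cost. Component names, latencies, and parameters are all invented, and the actual algorithm may differ substantially.

```python
# Minimal epsilon-greedy sketch of online component selection in the spirit of
# Q-CSCD: each candidate 5G component (an EAS-to-UE route) is an arm, observed
# delivery latency is the cost, and the learner converges to the low-latency
# component while still exploring.
import random

class ComponentSelector:
    def __init__(self, components, epsilon=0.1):
        self.epsilon = epsilon
        self.estimates = {c: 0.0 for c in components}  # mean observed latency
        self.counts = {c: 0 for c in components}

    def select(self):
        if random.random() < self.epsilon:                   # explore
            return random.choice(list(self.estimates))
        # exploit; zero-initialized estimates make unvisited arms get tried first
        return min(self.estimates, key=self.estimates.get)

    def observe(self, component, latency_ms):
        self.counts[component] += 1
        n = self.counts[component]
        # incremental mean update
        self.estimates[component] += (latency_ms - self.estimates[component]) / n

# Usage with hypothetical routes; latencies would come from real measurements.
selector = ComponentSelector(["eas1-gnb1", "eas1-gnb2", "eas2-gnb1"])
for _ in range(1000):
    c = selector.select()
    latency = {"eas1-gnb1": 12, "eas1-gnb2": 30, "eas2-gnb1": 18}[c] + random.gauss(0, 2)
    selector.observe(c, latency)
print(min(selector.estimates, key=selector.estimates.get))  # likely "eas1-gnb1"
```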
{"title":"QoS-aware 5G component selection for content delivery in multi-access edge computing","authors":"E. Maleki, Weibin Ma, Lena Mashayekhy, Humberto J. La Roche","doi":"10.1145/3468737.3494101","DOIUrl":"https://doi.org/10.1145/3468737.3494101","url":null,"abstract":"The demand for content such as multimedia services with stringent latency requirements has proliferated significantly, posing heavy backhaul congestion in mobile networks. The integration of Multi-access Edge Computing (MEC) and 5G network is an emerging solution that alleviates the backhaul congestion to meet QoS requirements such as ultra-low latency, ultra-high reliability, and continuous connectivity to support various latency-critical applications for user equipment (UE). Content caching can markedly augment QoS for UEs by increasing the availability of popular content. However, uncertainties originating from user mobility cause the most challenging barrier in deciding content routes for UEs that lead to minimum latency. Considering the 5G-enabled MEC components, it is critical to select the optimal 5G components, representing content routes from Edge Application Servers (EASs) to UEs, that enhances QoS for the UEs with uncertain mobility patterns by reducing frequent handover (path reallocation). To this aim, we study the component selection for QoS-aware content delivery in 5G-enabled MEC. We first formulate an integer programming (IP) optimization model to obtain the optimal content routing decisions. As this problem is NP-hard, we tackle its intractability by designing an efficient online learning approach, called Q-CSCD, to achieve a bounded performance. Q-CSCD learns the optimal component selection for UEs and autonomously makes decisions to minimize latency for content delivery. We conduct extensive experiments based on a real-world dataset to validate the effectiveness of our proposed algorithm. The results reveal that Q-CSCD leads to low latency and handover ratio in a reasonable time with a reduced regret over time.","PeriodicalId":254382,"journal":{"name":"Proceedings of the 14th IEEE/ACM International Conference on Utility and Cloud Computing","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121056201","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Game-theoretic modeling of DDoS attacks in cloud computing
Kaho Wan, Joel Coffman
The benefits of cloud computing have attracted many organizations to migrate their IT infrastructures into the cloud. In an infrastructure as a service (IaaS) model, the cloud service provider offers services to multiple consumers using shared physical hardware resources. However, by sharing a cloud environment with other consumers, organizations may also share security risks with their cotenants. Distributed denial of service (DDoS) attacks are considered one of the major security threats in cloud computing. Without a proper defense mechanism, an attack against one tenant can also affect the availability of cotenants. This work uses a game-theoretic approach to analyze the interactions between various entities when the cloud is under attack. The resulting Nash equilibrium shows that collateral damage to cotenants is unlikely if the cloud service provider is unbiased and chooses a rational strategy, but the Nash equilibrium can change when the cloud service provider does not treat cloud consumers equally. The cloud service provider's bias can influence its strategy selection and create a situation where untargeted users suffer unnecessary collateral damage from DDoS attacks.
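As a toy illustration of the modeling style, the sketch below brute-forces pure-strategy Nash equilibria for an invented two-tenant attacker/provider payoff matrix. The numbers are made up and deliberately simple; this particular game happens to have no pure equilibrium, which is one reason mixed strategies enter such analyses. It is not the paper's actual game model.

```python
# Toy two-player game: the attacker picks a tenant to flood, the provider picks
# where to spend scrubbing capacity. Payoffs are invented for illustration; the
# point is how pure-strategy Nash equilibria are found by best-response checks.
import itertools
import numpy as np

# Rows: attacker targets tenant A or tenant B.
# Cols: provider defends tenant A or tenant B.
attacker_payoff = np.array([[-1.0,  3.0],
                            [ 3.0, -1.0]])
provider_payoff = np.array([[ 1.0, -4.0],
                            [-4.0,  1.0]])

def pure_nash(p1, p2):
    eqs = []
    for i, j in itertools.product(range(p1.shape[0]), range(p1.shape[1])):
        best_row = p1[i, j] >= p1[:, j].max()   # attacker cannot improve
        best_col = p2[i, j] >= p2[i, :].max()   # provider cannot improve
        if best_row and best_col:
            eqs.append((i, j))
    return eqs

print(pure_nash(attacker_payoff, provider_payoff))  # [] here: only mixed equilibria exist
```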
{"title":"Game-theoretic modeling of DDoS attacks in cloud computing","authors":"Kaho Wan, Joel Coffman","doi":"10.1145/3468737.3494093","DOIUrl":"https://doi.org/10.1145/3468737.3494093","url":null,"abstract":"The benefits of cloud computing have attracted many organizations to migrate their IT infrastructures into the cloud. In an infrastructure as a service (IaaS) model, the cloud service provider offers services to multiple consumers using shared physical hardware resources. However, by sharing a cloud environment with other consumers, organizations may also share security risks with their cotenants. Distributed denial of service (DDoS) attacks are considered one of the major security threats in cloud computing. Without a proper defense mechanism, an attack against one tenant can also affect the availability of cotenants. This work uses a game-theoretic approach to analyze the interactions between various entities when the cloud is under attack. The resulting Nash equilibrium shows that collateral damage to cotenants is unlikely if the cloud service provider is unbiased and chooses a rational strategy, but the Nash equilibrium can change when the cloud service provider does not treat cloud consumers equally. The cloud service provider's bias can influence its strategy selection and create a situation where untargeted users suffer unnecessary collateral damage from DDoS attacks.","PeriodicalId":254382,"journal":{"name":"Proceedings of the 14th IEEE/ACM International Conference on Utility and Cloud Computing","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121246289","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Concentrated isolation for container networks toward application-aware sandbox tailoring
Yuki Nakata, Katsuya Matsubara, Ryosuke Matsumoto
Containers provide lightweight, fine-grained isolation of computational resources such as CPUs, memory, storage, and networks, but their weak isolation raises security concerns. As a result, research and development efforts have focused on redesigning truly sandboxed containers using system-call interception and hardware virtualization techniques such as gVisor and Kata Containers. However, such fully integrated sandboxing can overwhelm the lightweight and scalable nature of containers. In this work, we propose a partially fortified sandboxing mechanism that concentrates on fortifying network isolation, based on the observation that containerized clouds and the applications running on them require different isolation levels in accordance with their unique characteristics. We describe how to efficiently implement the mechanism to fortify network isolation for containers with a para-passthrough hypervisor, and report evaluation results with benchmarks and real applications. Our findings demonstrate that this fortified network isolation has good potential for tailoring sandboxes to containerized PaaS/FaaS clouds.
{"title":"Concentrated isolation for container networks toward application-aware sandbox tailoring","authors":"Yuki Nakata, Katsuya Matsubara, Ryosuke Matsumoto","doi":"10.1145/3468737.3494092","DOIUrl":"https://doi.org/10.1145/3468737.3494092","url":null,"abstract":"Containers provide a lightweight and fine-grained isolation for computational resources such as CPUs, memory, storage, and networks, but their weak isolation raises security concerns. As a result, research and development efforts have focused on redesigning truly sandboxed containers with system call intercept and hardware virtualization techniques such as gVisor and Kata Containers. However, such fully integrated sandboxing could overwhelm the lightweight and scalable nature of the containers. In this work, we propose a partially fortified sandboxing mechanism that concentratedly fortifies the network isolation, focusing on the fact that containerized clouds and the applications running on them require different isolation levels in accordance with their unique characteristics. We describe how to efficiently implement the mechanism to fortify network isolation for containers with a para-passthrough hypervisor and report evaluation results with benchmarks and real applications. Our findings demonstrate that this fortified network isolation has good potential to tailor sandboxes for containerized PaaS/FaaS clouds.","PeriodicalId":254382,"journal":{"name":"Proceedings of the 14th IEEE/ACM International Conference on Utility and Cloud Computing","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127714935","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Exploring the cost and performance benefits of AWS step functions using a data processing pipeline
Anil Mathew, V. Andrikopoulos, F. Blaauw
In traditional cloud computing, dedicated hardware is substituted by dynamically allocated, utility-oriented resources such as virtualized servers. While cloud services follow the pay-as-you-go pricing model, resources are billed based on instance allocation rather than actual usage, leading to customers being charged needlessly. In serverless computing, as exemplified by the Function-as-a-Service (FaaS) model where functions are the basic resources, functions are typically not allocated or charged until invoked or triggered. Functions are not applications, however, and to build compelling serverless applications they frequently need to be orchestrated with some kind of application logic. A major issue arising from the use of orchestration is that it further complicates the already complex billing model used by FaaS providers, which, combined with the lack of granular billing and execution details offered by the providers, makes the development and evaluation of serverless applications challenging. To shed some light on this matter, in this work we extensively evaluate the state-of-the-art function orchestrator AWS Step Functions (ASF) with respect to its performance and cost. For this purpose we conduct a series of experiments using a serverless data processing pipeline application developed as both ASF Standard and Express workflows. Our results show that Step Functions using Express workflows are economical when running short-lived tasks with many state transitions. In contrast, Standard workflows are better suited for long-running tasks and additionally offer detailed debugging and logging information. However, even though the behavior of the orchestrated AWS Lambda functions influences both types of workflows, Step Functions realized as Express workflows are impacted the most by the phenomena affecting Lambda functions.
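As a concrete point of reference for the Standard/Express distinction, here is a minimal boto3 sketch that deploys the same trivial pipeline as both workflow types. The Lambda ARNs, IAM role ARN, and state-machine definition are placeholders, not the paper's benchmark application.

```python
# Sketch: deploy one data-processing pipeline twice, as a STANDARD and an
# EXPRESS Step Functions workflow, using boto3. Requires valid AWS credentials.
import json
import boto3

sfn = boto3.client("stepfunctions")

# Trivial two-state pipeline in Amazon States Language; the Lambda ARNs below
# are hypothetical placeholders.
definition = json.dumps({
    "StartAt": "Extract",
    "States": {
        "Extract": {"Type": "Task",
                    "Resource": "arn:aws:lambda:eu-west-1:123456789012:function:extract",
                    "Next": "Transform"},
        "Transform": {"Type": "Task",
                      "Resource": "arn:aws:lambda:eu-west-1:123456789012:function:transform",
                      "End": True},
    },
})

for wf_type in ("STANDARD", "EXPRESS"):
    sfn.create_state_machine(
        name=f"pipeline-{wf_type.lower()}",
        definition=definition,
        roleArn="arn:aws:iam::123456789012:role/sfn-exec-role",  # placeholder
        type=wf_type,  # EXPRESS: cheap, short-lived runs; STANDARD: long runs, full history
    )
```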
{"title":"Exploring the cost and performance benefits of AWS step functions using a data processing pipeline","authors":"Anil Mathew, V. Andrikopoulos, F. Blaauw","doi":"10.1145/3468737.3494084","DOIUrl":"https://doi.org/10.1145/3468737.3494084","url":null,"abstract":"In traditional cloud computing, dedicated hardware is substituted by dynamically allocated, utility-oriented resources such as virtualized servers. While cloud services are following the pay-as-you-go pricing model, resources are billed based on instance allocation and not on the actual usage, leading the customers to be charged needlessly. In serverless computing, as exemplified by the Function-as-a-Service (FaaS) model where functions are the basic resources, functions are typically not allocated or charged until invoked or triggered. Functions are not applications, however, and to build compelling serverless applications they frequently need to be orchestrated with some kind of application logic. A major issue emerging by the use of orchestration is that it complicates further the already complex billing model used by FaaS providers, which in combination with the lack of granular billing and execution details offered by the providers makes the development and evaluation of serverless applications challenging. Towards shedding some light into this matter, in this work we extensively evaluate the state-of-the-art function orchestrator AWS Step Functions (ASF) with respect to its performance and cost. For this purpose we conduct a series of experiments using a serverless data processing pipeline application developed as both ASF Standard and Express workflows. Our results show that Step Functions using Express workflows are economical when running short-lived tasks with many state transitions. In contrast, Standard workflows are better suited for long-running tasks, offering in addition detailed debugging and logging information. However, even if the behavior of the orchestrated AWS Lambda functions influences both types of workflows, Step Functions realized as Express workflows get impacted the most by the phenomena affecting Lambda functions.","PeriodicalId":254382,"journal":{"name":"Proceedings of the 14th IEEE/ACM International Conference on Utility and Cloud Computing","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126854701","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7
FlyNet
Eric J. Lyons, Hakan Saplakoglu, M. Zink, Komal Thareja, A. Mandal, Chengyi Qu, Songjie Wang, P. Calyam, G. Papadimitriou, Ryan Tanaka, E. Deelman
Many Internet of Things (IoT) applications require compute resources that cannot be provided by the devices themselves. At the same time, processing of the data generated by IoT devices often has to be performed in real or near-real time. Examples of such scenarios are autonomous vehicles in the form of cars and drones, where the processing of observational data (e.g., video feeds) needs to be performed expeditiously to allow safe operation. To support the computational needs and timeliness requirements of such applications, it is essential to include suitable edge resources on which to execute them. In this paper, we present our FlyNet architecture, whose goal is to provide a new platform supporting workflows that include applications executing at the network edge and at the computing core while leveraging deeply programmable networks. We discuss the challenges associated with provisioning such networking and compute infrastructure on demand, tailored to IoT application workflows. We describe a strategy to leverage an end-to-end integrated infrastructure that covers all points in the spectrum of response latency for application processing. We present our prototype implementation of the architecture and evaluate its performance for drone video analytics workflows with varying computational requirements.
{"title":"FlyNet","authors":"Eric J. Lyons, Hakan Saplakoglu, M. Zink, Komal Thareja, A. Mandal, Chengyi Qu, Songjie Wang, P. Calyam, G. Papadimitriou, Ryan Tanaka, E. Deelman","doi":"10.1145/3468737.3494098","DOIUrl":"https://doi.org/10.1145/3468737.3494098","url":null,"abstract":"Many Internet of Things (IoT) applications require compute resources that cannot be provided by the devices themselves. At the same time, processing of the data generated by IoT devices often has to be performed in real- or near real-time. Examples of such scenarios are autonomous vehicles in the form of cars and drones where the processing of observational data (e.g., video feeds) needs to be performed expeditiously to allow for safe operation. To support the computational needs and timeliness requirements of such applications it is essential to include suitable edge resources to execute these applications. In this paper, we present our FlyNet architecture which has the goal to provide a new platform to support workflows that include applications executing at the network edge, at the computing core, and leverage deeply programmable networks. We discuss the challenges associated with provisioning such networking and compute infrastructure on demand, tailored to IoT application workflows. We describe a strategy to leverage the end-to-end integrated infrastructure that covers all points in the spectrum of response latency for application processing. We present our prototype implementation of the architecture and evaluate its performance for the case of drone video analytics workflows with varying computational requirements.","PeriodicalId":254382,"journal":{"name":"Proceedings of the 14th IEEE/ACM International Conference on Utility and Cloud Computing","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125282905","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Courier: delivering serverless functions within heterogeneous FaaS deployments
Anshul Jindal, Julian Frielinghaus, Mohak Chadha, M. Gerndt
With the advent of serverless computing in different domains, there is a growing need for dynamic adaptation to handle diverse and heterogeneous functions. However, serverless computing is currently limited to homogeneous Function-as-a-Service (FaaS) deployments, where a FaaS Deployment (FaaSD) consists of serverless functions deployed on one FaaS platform in one region with certain memory configurations. Extending serverless computing to support Heterogeneous FaaS Deployments (HeteroFaaSDs), each consisting of multiple FaaSDs with variable configurations (FaaS platform, region, and memory), and dynamically load balancing the invocations of functions across the FaaSDs within a HeteroFaaSD can provide an optimal way of handling such serverless functions. In this paper, we present a software system called Courier that is responsible for optimally distributing invocations of functions (called delivering of serverless functions) within HeteroFaaSDs, based on the execution times of the functions on the FaaSDs comprising them. To this end, we developed two approaches, Auto Weighted Round-Robin (AWRR) and Per-Function Auto Weighted Round-Robin (PFAWRR), that use function execution times to deliver serverless functions within a HeteroFaaSD so as to reduce the overall execution time. We demonstrate and evaluate the functioning of our developed tool on three HeteroFaaSDs using three FaaS platforms: 1) on-premises Apache OpenWhisk, 2) AWS Lambda, and 3) Google Cloud Functions (GCF). We show that Courier can improve the overall performance of function invocations within a HeteroFaaSD compared to traditional load-balancing algorithms.
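The abstract describes AWRR precisely enough to sketch its core loop: weight each FaaSD by the inverse of the function's mean observed execution time, then dispatch invocations with a smooth weighted round-robin. The sketch below is an interpretation of that description, not Courier's code; the FaaSD names, timings, and credit scheme are assumptions.

```python
# Sketch of the Auto Weighted Round-Robin idea: faster FaaS deployments receive
# proportionally more invocations, with weights derived from measured times.
from collections import defaultdict

class AutoWeightedRoundRobin:
    def __init__(self, faasds):
        self.times = {f: [] for f in faasds}   # observed execution times per FaaSD
        self.credit = defaultdict(float)       # accumulated scheduling credit

    def record(self, faasd, exec_time_s):
        self.times[faasd].append(exec_time_s)

    def _weights(self):
        # weight proportional to 1 / mean execution time; unmeasured FaaSDs get 1
        w = {f: 1.0 / (sum(t) / len(t)) if t else 1.0 for f, t in self.times.items()}
        total = sum(w.values())
        return {f: v / total for f, v in w.items()}

    def next_faasd(self):
        # smooth weighted round-robin: top up credits, dispatch the richest
        for f, w in self._weights().items():
            self.credit[f] += w
        choice = max(self.credit, key=self.credit.get)
        self.credit[choice] -= 1.0
        return choice

# Hypothetical HeteroFaaSD: on-premises OpenWhisk, AWS Lambda, Google Cloud Functions.
lb = AutoWeightedRoundRobin(["openwhisk", "lambda", "gcf"])
for f, t in [("openwhisk", 0.8), ("lambda", 0.2), ("gcf", 0.4)]:
    lb.record(f, t)
print([lb.next_faasd() for _ in range(7)])  # "lambda" appears most often
```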
{"title":"Courier: delivering serverless functions within heterogeneous FaaS deployments","authors":"Anshul Jindal, Julian Frielinghaus, Mohak Chadha, M. Gerndt","doi":"10.1145/3468737.3494097","DOIUrl":"https://doi.org/10.1145/3468737.3494097","url":null,"abstract":"With the advent of serverless computing in different domains, there is a growing need for dynamic adaption to handle diverse and heterogeneous functions. However, serverless computing is currently limited to homogeneous Function-as-a-Service (FaaS) deployments or simply FaaS Deployment (FaaSD) consisting of deployments of serverless functions using a FaaS platform in a region with certain memory configurations. Extending serverless computing to support Heterogeneous FaaS Deployments (HeteroFaaSDs) consisting of multiple FaaSDs with variable configurations (FaaS platform, region, and memory) and dynamically load balancing the invocations of the functions across these FaaSDs within a HeteroFaaSD can provide an optimal way for handling such serverless functions. In this paper, we present a software system called Courier that is responsible for optimally distributing the invocations of the functions (called delivering of serverless functions) within the HeteroFaaSDs based on the execution time of the functions on the FaaSDs comprising the HeteroFaaSDs. To this end, we developed two approaches: Auto Weighted Round-Robin (AWRR) and PerFunction Auto Weighted Round-Robin (PFAWRR) that use functions execution times for delivering serverless functions within a HeteroFaaSD to reduce the overall execution time. We demonstrate and evaluate the functioning of our developed tool on three HeteroFaaSDs using three FaaS platforms: 1) on-premise Open-Whisk, 2) AWS Lambda, and 3) Google Cloud Functions (GCF). We show that Courier can improve the overall performance of the invocations of the functions within a HeteroFaaSD as compared to traditional load balancing algorithms.","PeriodicalId":254382,"journal":{"name":"Proceedings of the 14th IEEE/ACM International Conference on Utility and Cloud Computing","volume":"144 8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129605487","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 9
Distributed federated service chaining for heterogeneous network environments
Chen Chen, Lars Nagel, Lin Cui, Fung Po Tso
Future networks are expected to support cross-domain, cost-aware, and fine-grained services in an efficient and flexible manner. Service Function Chaining (SFC) has been introduced as a promising approach to delivering these services. In the literature, centralized resource orchestration is usually employed to process SFC requests and manage computing and network resources. However, centralized approaches inhibit scalability and domain autonomy in multi-domain networks. They also neglect the location and hardware dependencies of service chains. In this paper, we propose federated service chaining, a distributed framework that orchestrates and maintains SFC placement while sharing a minimal amount of domain information and control. We first formulate a deployment cost minimization problem as an Integer Linear Programming (ILP) problem with fine-grained constraints for location and hardware dependencies, which is NP-hard. We then devise a Distributed Federated Service Chaining placement approach (DFSC) using inter-domain paths and border-node information. Our extensive experiments demonstrate that DFSC efficiently optimizes deployment cost, supports domain autonomy, and enables faster decision-making. The results show that DFSC finds solutions within a factor of 1.15 of the optimal. Compared to a centralized approach from the literature, DFSC reduces deployment cost by 12% while being an order of magnitude faster.
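To give a flavor of the ILP formulation, the toy model below places one three-VNF chain with PuLP, with a location dependency pinning the firewall to an edge node and a per-node capacity limit. All costs, nodes, and VNF names are invented, and the paper's multi-domain formulation is far richer than this single-chain sketch.

```python
# Toy deployment-cost ILP: place each VNF of one service chain on a node,
# minimizing total cost, subject to a location dependency and node capacities.
from pulp import LpBinary, LpMinimize, LpProblem, LpVariable, lpSum

nodes = ["edge1", "edge2", "core1"]
vnfs = ["firewall", "nat", "cache"]
cost = {("firewall", "edge1"): 4, ("firewall", "edge2"): 5, ("firewall", "core1"): 2,
        ("nat", "edge1"): 3,      ("nat", "edge2"): 3,      ("nat", "core1"): 1,
        ("cache", "edge1"): 6,    ("cache", "edge2"): 4,    ("cache", "core1"): 2}
capacity = {"edge1": 1, "edge2": 1, "core1": 2}

x = {(v, n): LpVariable(f"x_{v}_{n}", cat=LpBinary) for v in vnfs for n in nodes}
prob = LpProblem("sfc_placement", LpMinimize)
prob += lpSum(cost[v, n] * x[v, n] for v in vnfs for n in nodes)  # deployment cost

for v in vnfs:                                   # every VNF placed exactly once
    prob += lpSum(x[v, n] for n in nodes) == 1
for n in nodes:                                  # node capacity
    prob += lpSum(x[v, n] for v in vnfs) <= capacity[n]
# location dependency: the firewall must sit on an edge node
prob += lpSum(x["firewall", n] for n in ["edge1", "edge2"]) == 1

prob.solve()
print([(v, n) for (v, n), var in x.items() if var.value() == 1])
```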
{"title":"Distributed federated service chaining for heterogeneous network environments","authors":"Chen Chen, Lars Nagel, Lin Cui, Fung Po Tso","doi":"10.1145/3468737.3494091","DOIUrl":"https://doi.org/10.1145/3468737.3494091","url":null,"abstract":"Future networks are expected to support cross-domain, cost-aware and fine-grained services in an efficient and flexible manner. Service Function Chaining (SFC) has been introduced as a promising approach to deliver these services. In the literature, centralized resource orchestration is usually employed to process SFC requests and manage computing and network resources. However, centralized approaches inhibit the scalability and domain autonomy in multi-domain networks. They also neglect location and hardware dependencies of service chains. In this paper, we propose federated service chaining, a distributed framework which orchestrates and maintains the SFC placement while sharing a minimal amount of domain information and control. We first formulate a deployment cost minimization problem as an Integer Linear Programming (ILP) problem with fine-grained constraints for location and hardware dependencies, which is NP-hard. We then devise a Distributed Federated Service Chaining placement approach (DFSC) using inter-domain paths and border nodes information. Our extensive experiments demonstrate that DFSC efficiently optimizes the deployment cost, supports domain autonomy and enables faster decision-making. The results show that DFSC finds solutions within a factor 1.15 of the optimal solution. Compared to a centralized approach in the literature, DFSC reduces the deployment cost by 12% while being one order of magnitude faster.","PeriodicalId":254382,"journal":{"name":"Proceedings of the 14th IEEE/ACM International Conference on Utility and Cloud Computing","volume":"151 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115104768","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Amoeba
Antonis Papaioannou, K. Magoutis
{"title":"Amoeba","authors":"Antonis Papaioannou, K. Magoutis","doi":"10.2307/j.ctv6wgf4q.56","DOIUrl":"https://doi.org/10.2307/j.ctv6wgf4q.56","url":null,"abstract":"","PeriodicalId":254382,"journal":{"name":"Proceedings of the 14th IEEE/ACM International Conference on Utility and Cloud Computing","volume":"429 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123148919","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Enforcing deployment latency SLA in edge infrastructures through multi-objective genetic scheduler
Luis Augusto Dias Knob, C. Kayser, Paulo S. Souza, T. Ferreto
Edge Computing emerged as a solution for new applications, such as augmented reality, natural language processing, and data aggregation, whose requirements the Cloud does not entirely fulfill. Given that necessity, application deployment in edge scenarios usually uses container-based virtualization. When deploying in a resource-constrained infrastructure, the deployment latency to instantiate a container can increase due to bandwidth limitations or bottlenecks, which can significantly impact scenarios where edge applications have a short life period, high mobility, or interdependence between different microservices. To address this problem, we propose a novel container scheduler based on a multi-objective genetic algorithm. The scheduler's main objective is to enforce the Service Level Agreement set for each application, which defines when the application is expected to be effectively active in the infrastructure. We validate our proposal using simulation and evaluate it against two scheduling algorithms, showing a decrease both in the number of applications that do not fulfill their SLA and in the average time by which unfulfilled applications exceed their SLA.
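As a rough sketch of the scheduler's ingredients, the genetic algorithm below evolves container-to-node placements and penalizes placements whose estimated deployment latency (image size divided by the node's pull bandwidth) exceeds the application's SLA. It scalarizes the objectives instead of using the paper's true multi-objective formulation, and every number, node, and application name is invented.

```python
# Compressed genetic-scheduler sketch: a chromosome assigns each container to a
# node; fitness penalizes SLA violations first, total deployment latency second.
import random

random.seed(7)
NODES = {"n1": 100.0, "n2": 40.0, "n3": 10.0}          # MB/s available to pull images
APPS = [("app%d" % i, 200.0, 6.0) for i in range(8)]   # (name, image MB, SLA seconds)

def latency(app, node):
    _, size_mb, _ = app
    return size_mb / NODES[node]

def fitness(chrom):  # lower is better
    violations = sum(1 for app, node in zip(APPS, chrom) if latency(app, node) > app[2])
    total = sum(latency(app, node) for app, node in zip(APPS, chrom))
    return violations * 1000 + total  # SLA violations dominate the score

def evolve(pop_size=30, generations=60, mut=0.1):
    pop = [[random.choice(list(NODES)) for _ in APPS] for _ in range(pop_size)]
    for _ in range(generations):
        nxt = [min(pop, key=fitness)]                     # elitism: keep the best
        while len(nxt) < pop_size:
            a, b = (min(random.sample(pop, 3), key=fitness) for _ in range(2))
            cut = random.randrange(len(APPS))             # one-point crossover
            child = a[:cut] + b[cut:]
            child = [random.choice(list(NODES)) if random.random() < mut else g
                     for g in child]                      # per-gene mutation
            nxt.append(child)
        pop = nxt
    return min(pop, key=fitness)

best = evolve()
print(fitness(best), best)  # a placement avoiding the slow node "n3"
```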
{"title":"Enforcing deployment latency SLA in edge infrastructures through multi-objective genetic scheduler","authors":"Luis Augusto Dias Knob, C. Kayser, Paulo S. Souza, T. Ferreto","doi":"10.1145/3468737.3494100","DOIUrl":"https://doi.org/10.1145/3468737.3494100","url":null,"abstract":"Edge Computing emerged as a solution to new applications, like augmented reality, natural language processing, and data aggregation that relies on requirements that the Cloud does not entirely fulfill. Given that necessity, the application deployment in Edge scenarios usually uses container-based virtualization. When deployed in a resource-constrained infrastructure, the deployment latency to instantiate a container can increase due to bandwidth limitation or bottlenecks, which can significantly impact scenarios where the edge applications have a short life period, high mobility, or interdependence between different microservices. To attack this problem, we propose a novel container scheduler based on a multi-objective genetic algorithm. This scheduler has the main objective of ensuring the Service Level Agreement set on each application that defines when the application is expected to be effectively active in the infrastructure. We also validated our proposal using simulation and evaluate it against two scheduler algorithms, showing a decrease in the number of applications that do not fulfill the SLA and the average time over the SLA to not fulfilled applications.","PeriodicalId":254382,"journal":{"name":"Proceedings of the 14th IEEE/ACM International Conference on Utility and Cloud Computing","volume":"60 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129610887","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2