
Latest publications: 2020 IEEE/ACM 13th International Conference on Utility and Cloud Computing (UCC)

Dynamic Network Slicing in Fog Computing for Mobile Users in MobFogSim
Pub Date : 2020-12-01 DOI: 10.1109/UCC48980.2020.00042
Diogo M. Gonçalves, C. Puliafito, E. Mingozzi, O. Rana, L. Bittencourt, E. Madeira
Fog computing provides resources and services in proximity to users. To achieve the latency and throughput requirements of mobile users, it may be useful to migrate fog services in accordance with user movement – a scenario referred to as follow me cloud. The frequency of migration can be adapted based on the mobility pattern of a user. In such a scenario, the fog computing infrastructure should simultaneously accommodate users with different characteristics, both in terms of mobility (e.g., route and speed) and Quality of Service requirements (e.g., latency, throughput, and reliability). Migration performance may be improved by leveraging "network slicing", a capability available in Software Defined Networks with Network Function Virtualisation. In this work, we describe how we extended our simulator, called MobFogSim, to support dynamic network slicing and describe how MobFogSim can be used for capacity planning and service management for such mobile fog services. Moreover, we report an experimental evaluation of how dynamic network slicing impacts container migration in support of mobile users in a fog environment. Results show that dynamic network slicing can improve resource utilisation and migration performance in the fog.
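The core trade-off the abstract describes can be made concrete with a back-of-the-envelope model: a migration moves a container image over whatever bandwidth share the network slice grants, so widening the slice shortens the migration. The sketch below is illustrative only (it is not MobFogSim code; the function name and parameters are assumptions for the example).

```python
# Illustrative sketch (not MobFogSim code): estimate container migration time
# when a dynamic network slice grants a share of the link bandwidth.

def migration_time(container_mb: float, link_mbps: float, slice_share: float) -> float:
    """Seconds needed to transfer a container over the slice's bandwidth share."""
    if not 0 < slice_share <= 1:
        raise ValueError("slice_share must be in (0, 1]")
    slice_mbps = link_mbps * slice_share
    return container_mb * 8 / slice_mbps  # MB -> Mb, divided by Mb/s

# Doubling the slice share halves the transfer time for the same container.
base = migration_time(container_mb=200, link_mbps=100, slice_share=0.25)     # 64 s
boosted = migration_time(container_mb=200, link_mbps=100, slice_share=0.5)   # 32 s
```

In a simulator such as MobFogSim, a model like this is evaluated against user mobility traces: the slice share becomes a tunable knob, and the question is whether the migration finishes before the user leaves the coverage area of the current fog node.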
Citations: 6
DisGB: Using Geo-Context Information for Efficient Routing in Geo-Distributed Pub/Sub Systems
Pub Date : 2020-12-01 DOI: 10.1109/UCC48980.2020.00026
Jonathan Hasenburg, David Bermbach
IoT data are usually exchanged via pub/sub, e.g., based on the MQTT protocol. Especially in the IoT, however, the relevance of data often depends on the geo-context, e.g., the location of data source and sink. In this paper, we propose two inter-broker routing strategies that use this characteristic for the selection of rendezvous points. We evaluate, analytically and through experiments with a distributed pub/sub prototype, which strategy is best suited to three IoT scenarios. Based on simulation, we compare the performance and efficiency of our approach to the state of the art: Our strategies reduce the event delivery latency by up to 22 times compared to the only alternative that sends slightly fewer messages. Our strategies also require significantly fewer inter-broker messages than all other approaches while achieving at least the same performance.
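One simple flavour of geo-aware rendezvous selection is to place the rendezvous broker near the clients that will exchange the most messages, using great-circle distance as the proxy for latency. The sketch below is a hedged illustration of that idea, not DisGB's actual strategies; broker names and coordinates are invented for the example.

```python
import math

def haversine_km(a, b):
    """Great-circle distance between two (lat, lon) points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def pick_rendezvous(brokers, client_locations):
    """Pick the broker minimising the summed distance to the given clients."""
    return min(brokers, key=lambda name: sum(haversine_km(brokers[name], c)
                                             for c in client_locations))

brokers = {"berlin": (52.52, 13.40), "paris": (48.86, 2.35)}
subscribers = [(52.5, 13.3), (52.4, 13.5)]  # subscribers clustered near Berlin
# pick_rendezvous(brokers, subscribers) -> "berlin"
```

Whether to weight the choice toward subscribers, publishers, or both is exactly the kind of scenario-dependent decision the paper evaluates.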
Citations: 8
Group Mutual Exclusion to Scale Distributed Stream Processing Pipelines
Pub Date : 2020-12-01 DOI: 10.1109/UCC48980.2020.00043
Mehdi Belkhiria, M. Bertier, Cédric Tedeschi
Stream Processing has become the de facto standard way of supporting real-time data analytics. Stream Processing applications are typically shaped as pipelines of operators, with each record of the stream traversing all the operators of the graph. The placement of these operators on nodes of the platform can evolve over time according to different parameters, such as the velocity of the input stream and the capacity of nodes. Such adaptation calls for mechanisms such as dynamic operator scaling and migration. With the advent of Fog Computing, which gathers multiple computationally limited, geographically distributed resources, these mechanisms need to be decentralized, as a central coordinator orchestrating these actions is no longer a scalable solution. In a fully decentralized vision, each node hosts part of the pipeline and is responsible for scaling the operators it runs. More precisely, nodes trigger new instances of the operators they run or shut some of them down. Because the number of replicas of each operator evolves independently, the connections between nodes hosting neighbouring operators in the pipeline must be maintained. One issue is that, if all these operators can scale in or out dynamically, maintaining a consistent view of their neighbours becomes difficult, calling for synchronization mechanisms to avoid routing inconsistencies and data loss. In this paper, we show that this synchronization problem translates into a particular Group Mutual Exclusion (GME) problem, where a group comprises all instances of a given operator of the pipeline and conflicting groups are those hosting neighbouring operators in the pipeline. The specificity of our problem is that groups are fixed and that each group is in conflict with only one other group at a time. Based on these constraints, we formulate a new GME algorithm whose message complexity is reduced compared to algorithms in the literature, while ensuring a high level of concurrent occupancy (the number of processes of the same group in the critical section, i.e., the scaling mechanism, at the same time).
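The GME property itself is easy to state in code: any number of processes from the *same* group may occupy the critical section concurrently, while a process from a conflicting group must wait until the section empties. The shared-memory toy below illustrates only that property (the paper's contribution is a message-passing algorithm, which this sketch does not reproduce).

```python
import threading

class GroupLock:
    """Toy group mutual exclusion: threads of the same group share the
    critical section; a different group waits until the section is empty.
    Illustrative only -- not the paper's message-passing GME algorithm."""

    def __init__(self):
        self._cond = threading.Condition()
        self._group = None   # group currently occupying the critical section
        self._count = 0      # concurrent occupancy

    def enter(self, group):
        with self._cond:
            while self._group not in (None, group):
                self._cond.wait()        # a conflicting group is inside
            self._group = group
            self._count += 1

    def leave(self):
        with self._cond:
            self._count -= 1
            if self._count == 0:
                self._group = None       # section empty: any group may enter
                self._cond.notify_all()

lock = GroupLock()
lock.enter("scale-op-A"); lock.enter("scale-op-A")  # same group: both admitted
lock.leave(); lock.leave()                          # section empty again
```

In the pipeline setting, "entering the critical section" corresponds to an operator's instances running the scaling mechanism while their neighbouring operator's instances are kept out.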
Citations: 2
A Light-Weight Approach to Software Assignment at the Edge
Pub Date : 2020-12-01 DOI: 10.1109/UCC48980.2020.00060
R. Dautov, Hui Song, Nicolas Ferry
Containerised software running on edge infrastructures is required to be updated following agile practices to react to emerging business requirements, contextual changes, and security threats. Which version needs to be deployed on a particular device depends on multiple context properties, such as hardware/software resources, physical environment, user preferences, subscription type, etc. As fleets of edge devices nowadays comprise thousands of units, the amount of effort required to perform such an assignment often goes beyond manual capabilities, and automating this assignment task is an important prerequisite for application providers to implement continuous software delivery. This paper looks at this challenge as a generalised assignment problem and demonstrates how it can be solved using simple, yet efficient combinatorial optimisation techniques. The proof-of-concept implementation demonstrates the general viability of the approach, as well as its performance and scalability, through a series of benchmarking experiments.
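Framing software assignment as an optimisation problem means defining a cost for each (device, version) pair and searching for the cheapest feasible assignment. The sketch below uses naive exhaustive search to make the formulation concrete; device names, costs, and the RAM-based feasibility rule are invented for the example, and the paper's approach uses proper combinatorial optimisation techniques that scale far beyond this.

```python
from itertools import product

def assign_versions(devices, versions, cost):
    """Exhaustive search over version assignments (fine only for tiny fleets).
    cost(device, version) returns float('inf') for infeasible pairs."""
    best, best_cost = None, float("inf")
    for choice in product(versions, repeat=len(devices)):
        c = sum(cost(d, v) for d, v in zip(devices, choice))
        if c < best_cost:
            best, best_cost = dict(zip(devices, choice)), c
    return best

devices = ["cam-1", "cam-2"]
versions = ["full", "lite"]
ram = {"cam-1": 512, "cam-2": 2048}     # MB available per device (example data)
need = {"full": 1024, "lite": 256}      # MB required per software version

def cost(d, v):
    if need[v] > ram[d]:
        return float("inf")             # version does not fit on the device
    return ram[d] - need[v]             # prefer versions that use the hardware well

# cam-1 can only run "lite"; for cam-2, "full" leaves less slack than "lite",
# so the optimum assigns {"cam-1": "lite", "cam-2": "full"}.
```

Exhaustive search is O(|versions|^|devices|); the point of the paper is precisely that light-weight optimisation techniques keep this tractable at fleet scale.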
Citations: 2
Performance, Power, and Energy-Efficiency Impact Analysis of Compiler Optimizations on the SPEC CPU 2017 Benchmark Suite
Pub Date : 2020-12-01 DOI: 10.1109/UCC48980.2020.00047
Norbert Schmitt, James Bucek, John Beckett, Aaron Cragin, K. Lange, Samuel Kounev
The growth of cloud services leads to more and more data centers that are increasingly larger and consume considerable amounts of power. To increase energy efficiency, both the actual server equipment and the software must become more energy efficient. Software has a major impact on hardware utilization levels and, subsequently, on energy efficiency. While energy efficiency is often seen as identical to performance, we argue that this may not necessarily be the case. A sizable amount of energy could be saved, increasing energy efficiency by leveraging compiler optimizations, but at the same time impacting performance and power consumption over time. We analyze the SPEC CPU 2017 benchmark suite with 43 benchmarks from different domains, including integer and floating-point heavy computations, on a state-of-the-art server system for cloud applications. Our results show that power consumption displays more stable behavior if fewer compiler optimizations are used, and they also confirm that performance and energy efficiency are different optimization goals. Additionally, compiler optimizations could possibly be used to enable power capping at the software level, and care must be taken when selecting such optimizations.
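The claim that performance and energy efficiency are different goals follows directly from the definitions: energy is power integrated over time, and efficiency is work per joule, so a faster binary that draws disproportionately more power can deliver *less* work per joule. The numbers below are purely hypothetical, chosen only to illustrate the arithmetic (they are not measurements from the paper).

```python
# Hypothetical figures, only to show why "fastest" need not mean
# "most energy-efficient": energy = power x time, efficiency = work / energy.

def efficiency(ops: float, seconds: float, avg_watts: float) -> float:
    """Operations completed per joule of energy consumed."""
    return ops / (seconds * avg_watts)

# Same workload (1e9 operations) built with two optimization levels:
o2 = efficiency(1e9, seconds=10.0, avg_watts=80.0)   # slower, lower power draw
o3 = efficiency(1e9, seconds=8.0,  avg_watts=110.0)  # faster, higher power draw

# -O3 wins on runtime (8 s vs 10 s), yet in this example -O2 completes
# more operations per joule (800 J vs 880 J for the same work).
```

This is the shape of trade-off the paper measures across the 43 SPEC CPU 2017 benchmarks, where the sign and size of the gap vary by workload.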
Citations: 1
Blockchain Mobility Solution for Charging Transactions of Electrical Vehicles
Pub Date : 2020-12-01 DOI: 10.1109/UCC48980.2020.00055
Ahmed Afif Monrat, O. Schelén, K. Andersson
Many countries in Europe are adopting a deregulated system where prosumers can subscribe with any energy supplier in an open market, independently of location. However, the mobility aspect of transactions in the existing system is not satisfactorily covered. For instance, if a person receives the service of charging an EV from a prosumer's local outlet, they cannot pay the prosumer directly without an intermediary system. This has led to a situation where EV owners need a large number of subscriptions with EV charging providers, and visitors cannot pay for the electricity used there. This study evaluates this mobility gap and proposes a solution for charging transactions using blockchain technology. Furthermore, we implement a proof of concept using the Hyperledger consortium platform to assess the technical feasibility of the proposed approach, and we evaluate performance metrics such as transaction latency and throughput.
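The property a ledger contributes here is that each charging record commits to its predecessor, so neither party can later rewrite the payment history. The toy hash chain below illustrates only that commitment structure; it is not Hyperledger code, and the field names (`ev`, `kwh`, `prosumer`) are invented for the example.

```python
import hashlib
import json

def add_transaction(chain, record):
    """Append a charging record to a toy hash chain (illustrative only;
    the paper uses a Hyperledger consortium network, not this sketch)."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"record": record, "prev": prev_hash}
    # Hash the record together with the predecessor's hash, chaining the blocks.
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append(body)
    return body

chain = []
add_transaction(chain, {"ev": "EV-42", "kwh": 7.5, "prosumer": "P-7"})
add_transaction(chain, {"ev": "EV-42", "kwh": 3.0, "prosumer": "P-9"})
# Each block commits to its predecessor, so tampering with an earlier
# record invalidates every hash that follows it.
```

A consortium platform adds what this sketch lacks: identity management for prosumers and EV owners, endorsement policies, and consensus among the participating suppliers.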
Citations: 6
Message from the CloudAM 2020 Workshop Chairs
Pub Date : 2020-12-01 DOI: 10.1109/ucc48980.2020.00013
L. Bittencourt
Welcome to the ninth edition of the International Workshop on Cloud and Edge Computing, and Applications Management (CloudAM 2020). The maturation of the cloud computing paradigm has brought all kinds of applications to be deployed, totally or partially, in the cloud. Cloud research has recently been evolving along three different views: improving applications already running in the cloud, moving applications to the cloud, and making the distributed infrastructure suitable for new types of applications with different requirements. This includes applications that run on mobile devices as well as Internet of Things (IoT) applications, which can require lower latencies or more processing capacity closer to the edge of the network, resulting in a distributed infrastructure that complements the centralized cloud data centres.
Citations: 0
Message from the RTDPCC 2020 Workshop Chairs
Pub Date : 2020-12-01 DOI: 10.1109/ucc48980.2020.00014
X. Zhai
RTDPCC 2020 will provide a forum to discuss fundamental issues in the research and development of real-time data processing for cloud computing, as well as challenges in the design and implementation of novel real-time data processing algorithms, neural networks, architectures, and systems for sensor networks, healthcare systems, and the Internet of Things (IoT). RTDPCC 2020 provides a wonderful forum for you to refresh your knowledge base and explore innovations in the relevant research fields. The symposium and the main conference event will strive to offer plenty of networking opportunities, including meeting and interacting with leading scientists, researchers, and colleagues from the UK, China, USA, Qatar, Greece, and other countries. We thank the committee, who worked very hard in reviewing papers and providing feedback to authors, and we thank the hosting organization. We hope the symposium will give you a valuable opportunity to share ideas with other researchers and practitioners from institutions around the world, and we believe it complements perfectly the topical focus of UCC-2020, providing additional breadth and depth to the main conference. Finally, we hope you enjoy the workshop and have a fruitful meeting in Leicester, UK.
Citations: 0
Rule-Based Resource Matchmaking for Composite Application Deployments across IoT-Fog-Cloud Continuums
Pub Date : 2020-12-01 DOI: 10.1109/UCC48980.2020.00053
Josef Spillner, Panagiotis Gkikopoulos, Alina Buzachis, M. Villari
Where shall my new shiny application run? Hundreds of such questions are asked by software engineers who have many cloud services at their disposal, but increasingly also many other hosting options around managed edge devices and fog spectrums, including for function and container hosting (FaaS/CaaS). Especially for the composite applications prevalent in this field, the combinatorial deployment space is exploding. We claim that a systematic and automated approach is unavoidable in order to scale functionally decomposed applications further so that each hosting facility is fully exploited. To support engineers while they transition from cloud-native to continuum-native, we provide a rule-based matchmaker called RBMM that combines several decision factors typically present in software description formats and applies rules to them. Using the MaestroNG orchestrator and the OsmoticToolkit, we also contribute an integration of the matchmaker into an actual deployment environment.
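The essence of rule-based matchmaking is a set of predicates that each hosting option must satisfy for a given component. The sketch below shows that pattern in miniature; it is not the RBMM implementation, and the rule set, host inventory, and property names are invented for the example.

```python
# Minimal rule-based matchmaking sketch (not the RBMM implementation):
# each rule checks a hosting option against a component's declared needs.

RULES = [
    lambda comp, host: host["ram_mb"] >= comp.get("ram_mb", 0),       # enough memory
    lambda comp, host: comp.get("arch", host["arch"]) == host["arch"],  # CPU arch match
    lambda comp, host: not comp.get("gpu") or host.get("gpu", False),   # GPU if required
]

def matchmake(component, hosts):
    """Return the names of hosts satisfying every rule for the component."""
    return [name for name, host in hosts.items()
            if all(rule(component, host) for rule in RULES)]

hosts = {
    "edge-pi":  {"ram_mb": 1024,  "arch": "arm64",  "gpu": False},
    "fog-node": {"ram_mb": 8192,  "arch": "x86_64", "gpu": True},
    "cloud-vm": {"ram_mb": 16384, "arch": "x86_64", "gpu": False},
}
detector = {"ram_mb": 4096, "gpu": True}   # e.g. a video-analytics function
# matchmake(detector, hosts) -> ["fog-node"]
```

For a composite application, this per-component filtering is only the first step: an orchestrator must then pick one host per component while respecting capacity and the data flows between them, which is where the combinatorial deployment space grows.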
{"title":"Rule-Based Resource Matchmaking for Composite Application Deployments across IoT-Fog-Cloud Continuums","authors":"Josef Spillner, Panagiotis Gkikopoulos, Alina Buzachis, M. Villari","doi":"10.1109/UCC48980.2020.00053","DOIUrl":"https://doi.org/10.1109/UCC48980.2020.00053","url":null,"abstract":"Where shall my new shiny application run? Hundreds of such questions are asked by software engineers who have many cloud services at their disposition, but increasingly also many other hosting options around managed edge devices and fog spectrums, including for functions and container hosting (FaaS/CaaS). Especially for composite applications prevalent in this field, the combinatorial deployment space is exploding. We claim that a systematic and automated approach is unavoidable in order to scale functional decomposition applications further so that each hosting facility is fully exploited. To support engineers while they transition from cloud-native to continuum-native, we provide a rule-based matchmaker called RBMM that combines several decision factors typically present in software description formats and applies rules to them. Using the MaestroNG orchestrator and OsmoticToolkit, we also contribute an integration of the matchmaker into an actual deployment environment.","PeriodicalId":125849,"journal":{"name":"2020 IEEE/ACM 13th International Conference on Utility and Cloud Computing (UCC)","volume":"58 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131346760","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Robust Resource Scaling of Containerized Microservices with Probabilistic Machine learning
Pub Date: 2020-12-01 | DOI: 10.1109/UCC48980.2020.00031
Peng Kang, P. Lama
Large-scale web services are increasingly being built with many small modular components (microservices), which can be deployed, updated and scaled seamlessly. These microservices are packaged to run in a lightweight isolated execution environment (containers) and deployed on computing resources rented from cloud providers. However, the complex interactions and the contention of shared hardware resources in cloud data centers pose significant challenges in managing web service performance. In this paper, we present RScale, a robust resource scaling system that provides end-to-end performance guarantee for containerized microservices deployed in the cloud. RScale employs a probabilistic machine learning-based performance model, which can quickly adapt to changing system dynamics and directly provide confidence bounds in the predictions with minimal overhead. It leverages multi-layered data collected from container-level resource usage metrics and virtual machine-level hardware performance counter metrics to capture changing resource demands in the presence of multi-tenant performance interference. We implemented and evaluated RScale on NSF Cloud's Chameleon testbed using KVM for virtualization, Docker Engine for containerization and Kubernetes for container orchestration. Experimental results with an open-source microservices benchmark, Robot Shop, demonstrate the superior prediction accuracy and adaptiveness of our modeling approach compared to popular machine learning techniques. RScale meets the performance SLO (service-level-objective) targets for various microservice workflows even in the presence of multi-tenant performance interference and changing system dynamics.
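The key idea of scaling on confidence bounds rather than point predictions can be sketched in a few lines. The example below is illustrative only — RScale uses a probabilistic machine learning model over multi-layered metrics, whereas here the "prediction" is simply the sample mean of measured latencies and the upper confidence bound is mean plus two standard deviations, compared against the SLO so that scaling decisions stay robust to noisy measurements.

```python
# Illustrative sketch of confidence-aware scaling (not RScale's actual
# model). Scale out while the upper confidence bound of the predicted
# latency still violates the SLO, rather than scaling on the mean alone.
import statistics

def upper_bound(samples, k=2.0):
    # Mean plus k standard deviations, a crude stand-in for the
    # confidence bounds a probabilistic model would report directly.
    return statistics.fmean(samples) + k * statistics.stdev(samples)

def replicas_needed(latency_profiles, slo_ms, current):
    """Add replicas until the latency upper bound meets the SLO."""
    n = current
    while upper_bound(latency_profiles[n]) > slo_ms:
        n += 1
        if n not in latency_profiles:
            break  # no profiling data beyond this replica count
    return n

# Hypothetical latency samples (ms) per replica count, measured under
# multi-tenant interference.
profiles = {
    2: [130, 145, 160, 150],
    3: [95, 100, 105, 98],
    4: [70, 72, 75, 71],
}
print(replicas_needed(profiles, slo_ms=110, current=2))  # → 3
```

With two replicas the mean latency is about 146 ms and the upper bound about 171 ms, well above the 110 ms SLO; with three replicas the mean is 99.5 ms and the upper bound roughly 108 ms, so scaling stops there even though a mean-only policy would already have been satisfied at a noisier operating point.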
Citations: 12