Dynamic Network Slicing in Fog Computing for Mobile Users in MobFogSim
Pub Date: 2020-12-01 | DOI: 10.1109/UCC48980.2020.00042
Diogo M. Gonçalves, C. Puliafito, E. Mingozzi, O. Rana, L. Bittencourt, E. Madeira
Fog computing provides resources and services in proximity to users. To meet the latency and throughput requirements of mobile users, it may be useful to migrate fog services in accordance with user movement – a scenario referred to as follow me cloud. The frequency of migration can be adapted based on the mobility pattern of a user. In such a scenario, the fog computing infrastructure should simultaneously accommodate users with different characteristics, both in terms of mobility (e.g., route and speed) and Quality of Service requirements (e.g., latency, throughput, and reliability). Migration performance may be improved by leveraging "network slicing", a capability available in Software Defined Networks with Network Function Virtualisation. In this work, we describe how we extended our simulator, MobFogSim, to support dynamic network slicing, and how MobFogSim can be used for capacity planning and service management for such mobile fog services. Moreover, we report an experimental evaluation of how dynamic network slicing affects container migration in support of mobile users in a fog environment. Results show that dynamic network slicing can improve resource utilisation and migration performance in the fog.
{"title":"Dynamic Network Slicing in Fog Computing for Mobile Users in MobFogSim","authors":"Diogo M. Gonçalves, C. Puliafito, E. Mingozzi, O. Rana, L. Bittencourt, E. Madeira","doi":"10.1109/UCC48980.2020.00042","DOIUrl":"https://doi.org/10.1109/UCC48980.2020.00042","url":null,"abstract":"Fog computing provides resources and services in proximity to users. To achieve latency and throughput requirements of mobile users, it may be useful to migrate fog services in accordance with user movement – a scenario referred to as follow me cloud. The frequency of migration can be adapted based on the mobility pattern of a user. In such a scenario, the fog computing infrastructure should simultaneously accommodate users with different characteristics, both in terms of mobility (e.g., route and speed) and Quality of Service requirements (e.g., latency, throughput, and reliability). Migration performance may be improved by leveraging \"network slicing\", a capability available in Software Defined Networks with Network Function Virtualisation. In this work, we describe how we extended our simulator, called MobFogSim, to support dynamic network slicing and describe how MobFogSim can be used for capacity planning and service management for such mobile fog services. Moreover, we report an experimental evaluation of how dynamic network slicing impacts on container migration to support mobile users in a fog environment. Results show that dynamic network slicing can improve resource utilisation and migration performance in the fog.","PeriodicalId":125849,"journal":{"name":"2020 IEEE/ACM 13th International Conference on Utility and Cloud Computing (UCC)","volume":"129 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115964475","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
DisGB: Using Geo-Context Information for Efficient Routing in Geo-Distributed Pub/Sub Systems
Pub Date: 2020-12-01 | DOI: 10.1109/UCC48980.2020.00026
Jonathan Hasenburg, David Bermbach
IoT data are usually exchanged via pub/sub, e.g., based on the MQTT protocol. Especially in the IoT, however, the relevance of data often depends on the geo-context, e.g., the location of data source and sink. In this paper, we propose two inter-broker routing strategies that use this characteristic for the selection of rendezvous points. We evaluate, analytically and through experiments with a distributed pub/sub prototype, which strategy is best suited to three IoT scenarios. Based on simulation, we compare the performance and efficiency of our approach to the state of the art: our strategies reduce event delivery latency by up to 22 times compared to the only alternative that sends slightly fewer messages. Our strategies also require significantly fewer inter-broker messages than all other approaches while achieving at least the same performance.
{"title":"DisGB: Using Geo-Context Information for Efficient Routing in Geo-Distributed Pub/Sub Systems","authors":"Jonathan Hasenburg, David Bermbach","doi":"10.1109/UCC48980.2020.00026","DOIUrl":"https://doi.org/10.1109/UCC48980.2020.00026","url":null,"abstract":"IoT data are usually exchanged via pub/sub, e.g., based on the MQTT protocol. Especially in the IoT, however, the relevance of data often depends on the geo-context, e.g., the location of data source and sink. In this paper, we propose two inter-broker routing strategies that use this characteristic for the selection of rendezvous points. We evaluate analytically and through experiments with a distributed pub/sub prototype which strategy is best suited in three IoT scenarios. Based on simulation, we compare the performance and efficiency of our approach to the state of the art: Our strategies reduce the event delivery latency by up to 22 times compared to the only alternative that sends slightly fewer messages. Our strategies also require significantly less inter-broker messages than all other approaches while achieving at least the same performance.","PeriodicalId":125849,"journal":{"name":"2020 IEEE/ACM 13th International Conference on Utility and Cloud Computing (UCC)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126883469","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Group Mutual Exclusion to Scale Distributed Stream Processing Pipelines
Pub Date: 2020-12-01 | DOI: 10.1109/UCC48980.2020.00043
Mehdi Belkhiria, M. Bertier, Cédric Tedeschi
Stream Processing has become the de facto standard way of supporting real-time data analytics. Stream Processing applications are typically shaped as pipelines of operators, each record of the stream traversing all the operators of the graph. The placement of these operators on nodes of the platform can evolve over time according to different parameters, such as the velocity of the input stream and the capacity of nodes. Such adaptation calls for mechanisms such as dynamic operator scaling and migration. With the advent of Fog Computing, which gathers multiple computationally limited, geographically distributed resources, these mechanisms need to be decentralized, as a central coordinator orchestrating these actions is no longer a scalable solution.

In a fully decentralized vision, each node hosts part of the pipeline and is responsible for scaling the operators it runs. More precisely, nodes trigger new instances of the operators they run or shut some of them down. As the number of replicas of each operator evolves independently, the connections between nodes hosting neighbouring operators in the pipeline must be maintained. One issue is that, if all these operators can scale in or out dynamically, maintaining a consistent view of their neighbours becomes difficult, calling for synchronization mechanisms to avoid routing inconsistencies and data loss.

In this paper, we show that this synchronization problem translates into a particular Group Mutual Exclusion (GME) problem, where a group comprises all instances of a given operator of the pipeline and conflicting groups are those hosting neighbouring operators in the pipeline. The specificity of our problem is that groups are fixed and that each group is in conflict with only one other group at a time. Based on these constraints, we formulate a new GME algorithm whose message complexity is reduced compared to algorithms from the literature, while ensuring a high level of concurrent occupancy (the number of processes of the same group in the critical section, i.e. the scaling mechanism, at the same time).
{"title":"Group Mutual Exclusion to Scale Distributed Stream Processing Pipelines","authors":"Mehdi Belkhiria, M. Bertier, Cédric Tedeschi","doi":"10.1109/UCC48980.2020.00043","DOIUrl":"https://doi.org/10.1109/UCC48980.2020.00043","url":null,"abstract":"Stream Processing has become the de facto standard way of supporting real-time data analytics. Stream Processing applications are typically shaped as pipelines of operators, each record of the stream traversing all the operators of the graph. The placement of these operators on nodes of the platform can evolve through time according to different parameters such as the velocity of the input stream and the capacity of nodes. Such an adaptation calls for mechanisms such as dynamic operator scaling and migration. With the advent of Fog Computing, gathering multiple computationally-limited geographically-distributed resources, these mechanisms need to be decentralized, as a central coordinator orchestrating these actions is not a scalable solution any more.In a fully decentralized vision, each node hosts part of the pipeline. Each node is responsible for the scaling of the operators it runs. More precisely speaking, nodes trigger new instances of the operators they runs or shut some of them down. The number of replicas of each operator evolving independently, there is a need to maintain the connections between nodes hosting neighbouring operators in the pipeline. One issue is that, if all these operators can scale in or out dynamically, maintaining a consistent view of their neighbours becomes difficult, calling for synchronization mechanisms to ensure it, to avoid routing inconsistencies and data loss.In this paper, we show that this synchronization problem translate into a particular Group Mutual Exclusion (GME) problem where a group comprises all instances of a given operator of the pipeline and where conflicting groups are those hosting neighbouring operators in the pipeline. The specificity of our problem is that groups are fixed and that each group is in conflict with only one other groups at a time. Based on these constraints, we formulate a new GME algorithm whose message complexity is reduced when compared to algorithms of the literature, while being able to ensure a high level of concurrent occupancy (the number of processes of the same group in the critical section (the scaling mechanism) at the same time.","PeriodicalId":125849,"journal":{"name":"2020 IEEE/ACM 13th International Conference on Utility and Cloud Computing (UCC)","volume":"98 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121470486","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Light-Weight Approach to Software Assignment at the Edge
Pub Date: 2020-12-01 | DOI: 10.1109/UCC48980.2020.00060
R. Dautov, Hui Song, Nicolas Ferry
Containerised software running on edge infrastructures must be updated following agile practices to react to emerging business requirements, contextual changes, and security threats. Which version needs to be deployed on a particular device depends on multiple context properties, such as hardware/software resources, physical environment, user preferences, and subscription type. As fleets of edge devices nowadays comprise thousands of units, the effort required to perform such an assignment often goes beyond manual capabilities, and automating this assignment task is an important prerequisite for application providers to implement continuous software delivery. This paper treats this challenge as a generalised assignment problem and demonstrates how it can be solved using simple yet efficient combinatorial optimisation techniques. A proof-of-concept implementation demonstrates the general viability of the approach, as well as its performance and scalability, through a series of benchmarking experiments.
{"title":"A Light-Weight Approach to Software Assignment at the Edge","authors":"R. Dautov, Hui Song, Nicolas Ferry","doi":"10.1109/UCC48980.2020.00060","DOIUrl":"https://doi.org/10.1109/UCC48980.2020.00060","url":null,"abstract":"Containerised software running on edge infrastructures is required to be updated following agile practices to react to emerging business requirements, contextual changes, and security threats. Which version needs to be deployed on a particular device depends on multiple context properties, such as hardware/software resources, physical environment, user preferences, subscription type, etc. As fleets of edge devices are nowadays comprised of thousands of units, the amount of effort required to perform such assignment often goes beyond manual capabilities, and automating this assignment task is an important pre-requisite for application providers to implement continuous software delivery. This paper looks at this challenge as a generalised assignment problem and demonstrates how it can be solved using simple, yet efficient combinatorial optimisation techniques. The proof of concept implementation demonstrates the general viability of the approach, as well as its performance and scalability through a series of benchmarking experiments.","PeriodicalId":125849,"journal":{"name":"2020 IEEE/ACM 13th International Conference on Utility and Cloud Computing (UCC)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132492842","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Performance, Power, and Energy-Efficiency Impact Analysis of Compiler Optimizations on the SPEC CPU 2017 Benchmark Suite
Pub Date: 2020-12-01 | DOI: 10.1109/UCC48980.2020.00047
Norbert Schmitt, James Bucek, John Beckett, Aaron Cragin, K. Lange, Samuel Kounev
The growth of cloud services leads to more and larger data centers that consume considerable amounts of power. To increase energy efficiency, both the server equipment and the software must become more energy efficient. Software has a major impact on hardware utilization levels and, consequently, on energy efficiency. While energy efficiency is often treated as identical to performance, we argue that this is not necessarily the case. A sizable amount of energy could be saved by leveraging compiler optimizations, increasing energy efficiency while at the same time affecting performance and power consumption over time. We analyze the SPEC CPU 2017 benchmark suite, with 43 benchmarks from different domains, including integer-heavy and floating-point-heavy computations, on a state-of-the-art server system for cloud applications. Our results show that power consumption displays more stable behavior if fewer compiler optimizations are used, and confirm that performance and energy efficiency are different optimization goals. Additionally, compiler optimizations could possibly be used to enable power capping at the software level, and care must be taken when selecting such optimizations.
{"title":"Performance, Power, and Energy-Efficiency Impact Analysis of Compiler Optimizations on the SPEC CPU 2017 Benchmark Suite","authors":"Norbert Schmitt, James Bucek, John Beckett, Aaron Cragin, K. Lange, Samuel Kounev","doi":"10.1109/UCC48980.2020.00047","DOIUrl":"https://doi.org/10.1109/UCC48980.2020.00047","url":null,"abstract":"The growth of cloud services leads to more and more data centers that are increasingly larger and consume considerable amounts of power. To increase energy efficiency, both the actual server equipment and the software must become more energy efficient. Software has a major impact on hardware utilization levels, and subsequently, the energy efficiency. While energy efficiency is often seen as identical to performance, we argue that this may not be necessarily the case. A sizable amount of energy could be saved, increasing energy efficiency by leveraging compiler optimizations but at the same time impacting performance and power consumption over time. We analyze the SPEC CPU 2017 benchmark suite with 43 benchmarks from different domains, including integer and floating-point heavy computations on a state-of-the-art server system for cloud applications. Our results show that power consumption displays more stable behavior if less compiler optimizations are used and also confirmed that performance and energy efficiency are different optimizations goals. Additionally, compiler optimizations possibly could be used to enable power capping on a software level and care must be taken when selecting such optimizations.","PeriodicalId":125849,"journal":{"name":"2020 IEEE/ACM 13th International Conference on Utility and Cloud Computing (UCC)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117114877","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Blockchain Mobility Solution for Charging Transactions of Electrical Vehicles
Pub Date: 2020-12-01 | DOI: 10.1109/UCC48980.2020.00055
Ahmed Afif Monrat, O. Schelén, K. Andersson
Many countries in Europe are adopting a deregulated system where prosumers can subscribe to any energy supplier in an open market, independently of location. However, the mobility aspect of transactions is not satisfactorily covered by the existing system. For instance, if a person charges an EV from a prosumer's local outlet, they cannot pay the prosumer directly without an intermediary system. This has led to a situation where EV owners need a large number of subscriptions with EV charging providers, and visitors cannot pay for the electricity they use. This study evaluates this mobility gap and proposes a solution for charging transactions using blockchain technology. Furthermore, we implement a proof of concept on the Hyperledger consortium platform to show the technical feasibility of the proposed approach, and evaluate performance metrics such as transaction latency and throughput.
{"title":"Blockchain Mobility Solution for Charging Transactions of Electrical Vehicles","authors":"Ahmed Afif Monrat, O. Schelén, K. Andersson","doi":"10.1109/UCC48980.2020.00055","DOIUrl":"https://doi.org/10.1109/UCC48980.2020.00055","url":null,"abstract":"Many countries in Europe are adopting a deregulated system where prosumers can subscribe with any energy supplier in an open market, independently of location. However, the mobility aspect of transactions in the existing system is not satisfactorily covered. For instance, if a person receives the service of charging an EV from a prosumer’s local outlet, he cannot pay to the prosumer directly without the presence of an intermediary system. This has led to a situation where the EV owners need to have a large number of subscriptions for EV charging providers and visitors cannot pay for the electricity used there. This study evaluates this mobility gap and proposes a solution for charging transactions using blockchain technology. Furthermore, we implement a proof of concept using the Hyperledger consortium platform for the technical feasibility of the proposed approach and evaluate the performance metrics such as transaction latency and throughput.","PeriodicalId":125849,"journal":{"name":"2020 IEEE/ACM 13th International Conference on Utility and Cloud Computing (UCC)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114283634","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Message from the CloudAM 2020 Workshop Chairs
Pub Date: 2020-12-01 | DOI: 10.1109/ucc48980.2020.00013
L. Bittencourt
Welcome to the ninth edition of the International Workshop on Cloud and Edge Computing, and Applications Management (CloudAM 2020). The maturation of the cloud computing paradigm has brought all kinds of applications to be deployed, totally or partially, in the cloud. Cloud research has recently been evolving along three different directions: improving applications already running in the cloud, moving applications to the cloud, and making the distributed infrastructure suitable for new types of applications with different requirements. This includes applications that run on mobile devices as well as Internet of Things (IoT) applications, which can require lower latencies or more processing capacity closer to the edge of the network, resulting in a distributed infrastructure that complements the centralized cloud data centres.
{"title":"Message from the CloudAM 2020 Workshop Chairs","authors":"L. Bittencourt","doi":"10.1109/ucc48980.2020.00013","DOIUrl":"https://doi.org/10.1109/ucc48980.2020.00013","url":null,"abstract":"Welcome to the ninth edition of the International Workshop on Cloud and Edge Computing, and Applications Management (CloudAM 2020). The maturation of the cloud computing paradigm brought all kinds of applications to be deployed, totally or partially, in the cloud. Cloud research has been recently evolving to three different views: improve applications already running in the cloud, move applications to the cloud, and also making the distributed infrastructure suitable for new types of applications with different requirements. This includes applications that run in mobile devices and also internet of things (IoT) applications, which can require lower latencies or more processing capacity closer to the edge of the network, resulting in a distributed infrastructure that complements the centralized cloud data centres.","PeriodicalId":125849,"journal":{"name":"2020 IEEE/ACM 13th International Conference on Utility and Cloud Computing (UCC)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125790172","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Message from the RTDPCC 2020 Workshop Chairs
Pub Date: 2020-12-01 | DOI: 10.1109/ucc48980.2020.00014
X. Zhai
RTDPCC 2020 will provide a forum to discuss fundamental issues in the research and development of real-time data processing for cloud computing, as well as challenges in the design and implementation of novel real-time data processing algorithms, neural networks, architectures, and systems for sensor networks, healthcare systems, and the Internet of Things (IoT). RTDPCC 2020 provides a wonderful forum for you to refresh your knowledge base and explore innovations in the relevant research fields. The symposium and the main conference event will strive to offer plenty of networking opportunities, including meeting and interacting with leading scientists, researchers, and colleagues from the UK, China, the USA, Qatar, Greece, and other countries. We are grateful to the committee, who worked very hard in reviewing papers and providing feedback to authors, and we thank the hosting organization. We believe the symposium will give you a valuable opportunity to share ideas with other researchers and practitioners from institutions around the world, and that it complements perfectly the topical focus of UCC 2020, providing additional breadth and depth to the main conference. Finally, we hope you enjoy the workshop and have a fruitful meeting in Leicester, UK.
{"title":"Message from the RTDPCC 2020 Workshop Chairs","authors":"X. Zhai","doi":"10.1109/ucc48980.2020.00014","DOIUrl":"https://doi.org/10.1109/ucc48980.2020.00014","url":null,"abstract":"will provide a forum to discuss fundamental issues on research and development of real-time data processing for cloud computing as well as challenges in the design and implementation of novel real-time data processing algorithms, neural networks, architectures and systems for sensor networks, healthcare systems and Internet-of-Things (IoT). The RTDPCC-2020 provide a wonderful forum for you to refresh your knowledge base and explore the innovations in the relevant research fields. The symposium and the main conference event will strive to offer plenty of networking opportunities, including meeting and interacting with the leading scientists and researchers, and colleagues as well as and UK, China, USA, Qatar, Greece, and other We are the committee, very hard in reviewing papers and providing feedback to authors. Finally, we thank the hosting organization and the We the symposium will you a valuable opportunity to share ideas with other researchers and practitioners from institutions around the world. We the symposium complements perfectly the topical focus of UCC-2020 and provides additional breadth and depth to the main conference. Finally, we hope you enjoy the workshop and have a fruitful meeting in Leicester, UK.","PeriodicalId":125849,"journal":{"name":"2020 IEEE/ACM 13th International Conference on Utility and Cloud Computing (UCC)","volume":"69 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121565812","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Rule-Based Resource Matchmaking for Composite Application Deployments across IoT-Fog-Cloud Continuums
Pub Date: 2020-12-01 | DOI: 10.1109/UCC48980.2020.00053
Josef Spillner, Panagiotis Gkikopoulos, Alina Buzachis, M. Villari
Where shall my new shiny application run? Hundreds of such questions are asked by software engineers who have many cloud services at their disposal, but increasingly also many other hosting options around managed edge devices and fog spectrums, including for function and container hosting (FaaS/CaaS). Especially for the composite applications prevalent in this field, the combinatorial deployment space is exploding. We claim that a systematic and automated approach is unavoidable in order to scale functionally decomposed applications further so that each hosting facility is fully exploited. To support engineers while they transition from cloud-native to continuum-native, we provide a rule-based matchmaker called RBMM that combines several decision factors typically present in software description formats and applies rules to them. Using the MaestroNG orchestrator and OsmoticToolkit, we also contribute an integration of the matchmaker into an actual deployment environment.
{"title":"Rule-Based Resource Matchmaking for Composite Application Deployments across IoT-Fog-Cloud Continuums","authors":"Josef Spillner, Panagiotis Gkikopoulos, Alina Buzachis, M. Villari","doi":"10.1109/UCC48980.2020.00053","DOIUrl":"https://doi.org/10.1109/UCC48980.2020.00053","url":null,"abstract":"Where shall my new shiny application run? Hundreds of such questions are asked by software engineers who have many cloud services at their disposition, but increasingly also many other hosting options around managed edge devices and fog spectrums, including for functions and container hosting (FaaS/CaaS). Especially for composite applications prevalent in this field, the combinatorial deployment space is exploding. We claim that a systematic and automated approach is unavoidable in order to scale functional decomposition applications further so that each hosting facility is fully exploited. To support engineers while they transition from cloud-native to continuum-native, we provide a rule-based matchmaker called RBMM that combines several decision factors typically present in software description formats and applies rules to them. Using the MaestroNG orchestrator and OsmoticToolkit, we also contribute an integration of the matchmaker into an actual deployment environment.","PeriodicalId":125849,"journal":{"name":"2020 IEEE/ACM 13th International Conference on Utility and Cloud Computing (UCC)","volume":"58 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131346760","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Robust Resource Scaling of Containerized Microservices with Probabilistic Machine Learning
Pub Date: 2020-12-01 | DOI: 10.1109/UCC48980.2020.00031
Peng Kang, P. Lama
Large-scale web services are increasingly being built with many small modular components (microservices), which can be deployed, updated, and scaled seamlessly. These microservices are packaged to run in lightweight isolated execution environments (containers) and deployed on computing resources rented from cloud providers. However, the complex interactions and the contention for shared hardware resources in cloud data centers pose significant challenges in managing web service performance. In this paper, we present RScale, a robust resource scaling system that provides end-to-end performance guarantees for containerized microservices deployed in the cloud. RScale employs a probabilistic machine-learning-based performance model, which can quickly adapt to changing system dynamics and directly provide confidence bounds in its predictions with minimal overhead. It leverages multi-layered data collected from container-level resource usage metrics and virtual-machine-level hardware performance counter metrics to capture changing resource demands in the presence of multi-tenant performance interference. We implemented and evaluated RScale on NSF Cloud's Chameleon testbed using KVM for virtualization, Docker Engine for containerization, and Kubernetes for container orchestration. Experimental results with an open-source microservices benchmark, Robot Shop, demonstrate the superior prediction accuracy and adaptiveness of our modeling approach compared to popular machine learning techniques. RScale meets the performance SLO (service-level objective) targets for various microservice workflows even in the presence of multi-tenant performance interference and changing system dynamics.
{"title":"Robust Resource Scaling of Containerized Microservices with Probabilistic Machine learning","authors":"Peng Kang, P. Lama","doi":"10.1109/UCC48980.2020.00031","DOIUrl":"https://doi.org/10.1109/UCC48980.2020.00031","url":null,"abstract":"Large-scale web services are increasingly being built with many small modular components (microservices), which can be deployed, updated and scaled seamlessly. These microservices are packaged to run in a lightweight isolated execution environment (containers) and deployed on computing resources rented from cloud providers. However, the complex interactions and the contention of shared hardware resources in cloud data centers pose significant challenges in managing web service performance. In this paper, we present RScale, a robust resource scaling system that provides end-to-end performance guarantee for containerized microservices deployed in the cloud. RScale employs a probabilistic machine learning-based performance model, which can quickly adapt to changing system dynamics and directly provide confidence bounds in the predictions with minimal overhead. It leverages multi-layered data collected from container-level resource usage metrics and virtual machine-level hardware performance counter metrics to capture changing resource demands in the presence of multi-tenant performance interference. We implemented and evaluated RScale on NSF Cloud's Chameleon testbed using KVM for virtualization, Docker Engine for containerization and Kubernetes for container orchestration. Experimental results with an open-source microservices benchmark, Robot Shop, demonstrate the superior prediction accuracy and adaptiveness of our modeling approach compared to popular machine learning techniques. RScale meets the performance SLO (service-level-objective) targets for various microservice workflows even in the presence of multi-tenant performance interference and changing system dynamics.","PeriodicalId":125849,"journal":{"name":"2020 IEEE/ACM 13th International Conference on Utility and Cloud Computing (UCC)","volume":"245 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124697785","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}