MicroRAS: Automatic Recovery in the Absence of Historical Failure Data for Microservice Systems
2020 IEEE/ACM 13th International Conference on Utility and Cloud Computing (UCC)
Pub Date: 2020-12-01 | DOI: 10.1109/UCC48980.2020.00041
Li Wu, Johan Tordsson, Alexander Acker, O. Kao
Microservices represent a popular paradigm for constructing large-scale applications in many domains, thanks to benefits such as scalability, flexibility, and agility. However, it is difficult to manage and operate a microservice system due to its high dynamics and complexity. In particular, the frequent updates of microservices lead to an absence of historical failure data, where current automatic recovery methods fall short. In this paper, we propose an automatic recovery method named MicroRAS, which requires no historical failure data, to mitigate performance issues in microservice systems. MicroRAS is a model-driven method that selects an appropriate recovery action by trading off the effectiveness of actions against their recovery time. It estimates the effectiveness of an action in terms of its effect on recovering the pinpointed faulty service and its interference with other services. The estimation of action effects is based on a system-state model, represented by an attributed graph, that tracks the propagation of effects. For the experimental evaluation, several types of anomalies are injected into a Kubernetes-based microservice system serving a real-world workload. The corresponding benchmarks show that the actions selected by MicroRAS recover the faulty services by 94.7%, and reduce the interference to other services by at least 44.3%, compared to baseline methods.
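The effectiveness/recovery-time trade-off described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the candidate actions, effect scores, and the weighting scheme below are hypothetical stand-ins for MicroRAS's graph-based estimates.

```python
# Illustrative sketch: rank candidate recovery actions by trading off
# estimated effectiveness (recovery effect minus interference with other
# services) against normalized recovery time. All values are hypothetical.

def rank_actions(candidates, weight=0.5):
    """candidates: list of (name, recovery_effect, interference, recovery_time)."""
    max_time = max(c[3] for c in candidates)
    scored = []
    for name, recovery_effect, interference, recovery_time in candidates:
        effectiveness = recovery_effect - interference
        score = weight * effectiveness - (1 - weight) * (recovery_time / max_time)
        scored.append((score, name))
    # Best action first.
    return [name for _, name in sorted(scored, reverse=True)]

actions = [
    ("restart-container", 0.8, 0.1, 5.0),
    ("migrate-service",   0.9, 0.4, 30.0),
    ("scale-out-replica", 0.7, 0.2, 12.0),
]
print(rank_actions(actions))  # fast, low-interference actions rank first
```

With these toy numbers, the quick restart wins over the more effective but slow and interfering migration, which is the kind of trade-off the method formalizes.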
Energy-efficient Resource Allocation for UAV-empowered Mobile Edge Computing System
Pub Date: 2020-12-01 | DOI: 10.1109/UCC48980.2020.00064
Yu Cheng, Yangzhe Liao, X. Zhai
Unmanned aerial vehicles (UAVs) have been gained significant attention from mobile network operators (MNOs) to provision low-latency wireless big data applications, where a number of ground resource-limited user equipments (UEs) can be served by UAVs equipped with powerful computing resources, in comparison with UEs. In this paper, a novel UAV-empowered mobile edge computing (MEC) network architecture is considered. An energy consumption and task execution delay minimization multi-objective optimization problem is formulated, subject to numerous QoS constraints. A heuristic algorithm is proposed to solve the challenging optimization problem, which consists of the task assignment, differential evolution (DE)-aided and non-dominated sort steps. The selected key performance of the proposed algorithm is given and compared with the existing advanced particle swarm optimization (PSO) and non-dominated sorting genetic algorithm II (NSGA-II). The results show that the proposed heuristic algorithm promises higher energy efficiency than PSO and NSGA-II under the same task execution time cost.
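The non-dominated sorting step shared by the proposed heuristic and NSGA-II can be sketched in a few lines. This toy version (the solution values are hypothetical) extracts the Pareto front of (energy, delay) pairs, both to be minimized:

```python
# Toy non-dominated sorting: keep only solutions not dominated by any other.
# A solution a dominates b if a is no worse in every objective and strictly
# better in at least one. Objectives: (energy, delay), both minimized.

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical (energy, delay) values for five candidate task assignments.
solutions = [(3.0, 9.0), (2.0, 12.0), (4.0, 8.0), (5.0, 8.5), (2.5, 11.0)]
print(pareto_front(solutions))  # (5.0, 8.5) is dominated by (4.0, 8.0)
```

NSGA-II repeats this sorting over successive "fronts" and adds crowding-distance selection; the sketch shows only the core dominance test.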
CMFog: Proactive Content Migration Using Markov Chain and MADM in Fog Computing
Pub Date: 2020-12-01 | DOI: 10.1109/UCC48980.2020.00030
Marcelo C. Araújo, B. Sousa, M. Curado, L. Bittencourt
The popularization of mobile devices has led to the emergence of new demands that the centralized infrastructure of the Cloud has not been able to meet. In this scenario, Fog Computing emerges: it migrates part of the computational resources to the edge and offers low-latency access to devices connected to the network. Nowadays, many applications are highly interactive and highly sensitive to latency, requiring strategies that allow data migration to follow users' mobility and ensure Quality of Service (QoS) requirements. In this context, CMFog (Content Migration Fog) is proposed: a proactive migration strategy for virtual machines in the Fog that uses a Multiple Attribute Decision Making (MADM) approach to decide when and where a virtual machine should be migrated. A Markov chain is used to predict mobility, allowing migration decisions to be made proactively. The results achieved with CMFog demonstrate a reduction of up to 50% in average latency when compared with the reactive approach used as a baseline.
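The Markov-chain mobility prediction at the heart of the proactive decision can be sketched minimally. This is not the CMFog implementation; the zone names and the movement trace are hypothetical:

```python
# Minimal Markov-chain mobility prediction: estimate transition
# probabilities between fog zones from an observed trace, then predict
# the most likely next zone so migration can be triggered ahead of time.
from collections import Counter, defaultdict

def build_chain(trace):
    counts = defaultdict(Counter)
    for cur, nxt in zip(trace, trace[1:]):
        counts[cur][nxt] += 1
    # Normalize counts into per-zone transition probabilities.
    return {zone: {n: c / sum(cs.values()) for n, c in cs.items()}
            for zone, cs in counts.items()}

def predict_next(chain, zone):
    return max(chain[zone], key=chain[zone].get)

# Hypothetical sequence of fog zones visited by a user.
trace = ["A", "B", "A", "B", "C", "A", "B", "C", "C", "A", "B"]
chain = build_chain(trace)
print(predict_next(chain, "A"))  # most frequent successor of zone "A"
```

A real system would combine this prediction with the MADM scoring of candidate destination nodes before committing to a migration.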
Single-Input Multiple-Output Control for Multi-Goal Orchestration
Pub Date: 2020-12-01 | DOI: 10.1109/UCC48980.2020.00039
Lilia Sampaio, Armstrong Goes, Maxwell Albuquerque, Diego Gama, Jose Ignacio Schmid, Andrey Brito
In this paper, we propose a QoS-aware Single Input Multiple Output (SIMO) controller that combines performance and cost goals while aiming to maintain system stability. To enhance robustness, following well-established control concepts, we use system identification models and analytical tuning techniques for Proportional-Integral-Derivative (PID) controllers. The resulting SIMO PI controller performs well when tracking reference values that may change over time and when reconciling conflicting goals according to the user's preference. In contrast, a naïve use of independent controllers may lead to opposing decisions and instabilities, as the controllers work against each other. We examine the use of the controller to orchestrate processing pods in a Kubernetes cluster for an IoT sensor analysis application (power consumption disaggregation). Nevertheless, the lessons learned in the design of the controller apply to other use cases, including batch and interactive workloads.
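As a rough illustration of the PI building block the paper tunes analytically, here is a discrete PI step driving a toy latency plant. The gains, setpoint, and plant model are invented for the example and are not the authors' identified models or tuned values:

```python
# Discrete PI controller sketch: track a latency target by adjusting the
# number of processing pods. Gains and the toy plant are hypothetical.
class PIController:
    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def step(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        return self.kp * error + self.ki * self.integral

ctrl = PIController(kp=0.05, ki=0.01, dt=1.0)
pods = 2.0
latency = 180.0  # ms, above the 100 ms target
for _ in range(20):
    # The plant gain is negative (more pods -> lower latency),
    # so the controller output is subtracted from the pod count.
    pods = max(1.0, pods - ctrl.step(100.0, latency))
    latency = 400.0 / pods  # toy plant: latency inversely proportional to pods
print(round(latency, 1))
```

The paper's contribution is precisely in going beyond such a single loop: one input (e.g. pod count) is steered toward multiple outputs (performance and cost) without the instabilities that independent loops can cause.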
Fine-grained Autoscaling with In-VM Containers and VM Introspection
Pub Date: 2020-12-01 | DOI: 10.1109/UCC48980.2020.00034
Kohei Ueki, Kenichi Kourai
Clouds often provide a mechanism called autoscaling to deal with load increases of services running in virtual machines (VMs). When a VM is overloaded, a scale-out is performed that automatically increases the number of VMs. However, when multiple services run in one VM, the entire VM is scaled out even if only one service is over-utilized. In this case, only the over-utilized service should be scaled out, but it is not easy for clouds to accurately monitor the resource usage of services inside VMs. This paper proposes Ciel, which runs each service in a container created inside a VM to separate services, enabling fine-grained autoscaling of VMs. Using VM introspection, Ciel accurately monitors the resource usage of each in-VM container from outside the VM in a non-intrusive manner. If it detects an overloaded in-VM container, it creates a new VM of minimum size and boots only the container that needs to be scaled out in that VM. This minimizes both the cost of the VM and the time taken for scale-out. We have implemented Ciel using Xen and Docker and demonstrated its effectiveness.
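The fine-grained decision Ciel enables can be reduced to a tiny sketch: once per-container usage is visible from outside the VM, only the overloaded container is selected for scale-out instead of the whole VM. The threshold and the usage figures below are hypothetical, and the actual system obtains them via VM introspection rather than a dictionary:

```python
# Toy scale-out decision: given per-container CPU utilization observed
# from outside the VM, pick only the overloaded containers to replicate.
CPU_THRESHOLD = 0.8  # hypothetical utilization threshold

def containers_to_scale(vm_usage):
    """vm_usage maps container name -> CPU utilization in [0, 1]."""
    return [name for name, cpu in vm_usage.items() if cpu > CPU_THRESHOLD]

vm = {"web": 0.95, "auth": 0.30, "logging": 0.10}
print(containers_to_scale(vm))  # only "web" needs a new minimum-size VM
```

A VM-granular autoscaler would have replicated all three services here; scaling only "web" into a minimum-size VM is what saves cost and boot time.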
Self-Supervised Anomaly Detection from Distributed Traces
Pub Date: 2020-12-01 | DOI: 10.1109/UCC48980.2020.00054
Jasmin Bogatinovski, S. Nedelkoski, Jorge Cardoso, O. Kao
Artificial Intelligence for IT Operations (AIOps) combines big data and machine learning to automate a broad range of IT operations tasks, including reliability and performance monitoring of services. By exploiting observability data, AIOps enables the detection of faults and issues in services. The focus of this work is on detecting anomalies based on distributed tracing records, which contain detailed information about the services of a distributed system. Timely and accurate detection of trace anomalies is very challenging due to the large number of underlying microservices and the complex call relationships between them. We address the problem of anomaly detection from distributed traces with a novel self-supervised method and a new learning-task formulation. The method achieves high performance even on large traces and captures complex interactions between services. The evaluation shows that the approach achieves high accuracy and solid performance in the experimental testbed.
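The flavor of learning "normal" structure from traces alone can be shown with a deliberately simplified stand-in for the paper's model: learn which service calls typically follow which in normal traces, then flag a trace whose call transitions were never observed. Service names and traces are hypothetical:

```python
# Simplified trace-anomaly detector: a trace is anomalous if it contains a
# service-to-service transition never observed in the normal training traces.
# This is a crude proxy for the paper's self-supervised learning task.
from collections import defaultdict

def train(normal_traces):
    seen = defaultdict(set)
    for trace in normal_traces:
        for cur, nxt in zip(trace, trace[1:]):
            seen[cur].add(nxt)
    return seen

def is_anomalous(model, trace):
    return any(nxt not in model[cur] for cur, nxt in zip(trace, trace[1:]))

normal = [["gateway", "auth", "orders", "db"],
          ["gateway", "auth", "catalog", "db"]]
model = train(normal)
print(is_anomalous(model, ["gateway", "auth", "orders", "db"]))  # normal path
print(is_anomalous(model, ["gateway", "catalog", "auth"]))       # unseen transition
```

The actual method learns a far richer representation than co-occurrence sets, but the self-supervised principle is the same: the labels come from the normal traces themselves, not from historical failures.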
Capturing Public Concerns About Coronavirus Using Arabic Tweets: An NLP-Driven Approach
Pub Date: 2020-12-01 | DOI: 10.1109/UCC48980.2020.00049
Mohammed Bahja, R. Hammad, M. Kuhail
In order to analyze people's reactions to and opinions about Coronavirus (COVID-19), there is a need for a computational framework that leverages machine learning (ML) and natural language processing (NLP) techniques to identify COVID-related tweets and further categorize them into disease-specific feelings, addressing societal concerns related to the Safety, Worry, and Irony of COVID. This is an ongoing study, and the purpose of this paper is to demonstrate the initial results of determining the relevancy of tweets and what Arabic-speaking people were tweeting about three disease-related feelings/emotions about COVID: Safety, Worry, and Irony. A combination of ML and NLP techniques is used to determine what Arabic-speaking people are tweeting about COVID. A two-stage classifier system was built to find relevant tweets about COVID, and these tweets were then categorized into the three categories. Results indicated that the numbers of tweets by males and females were similar. Classification performance was high for both relevancy (F=0.85) and categorization (F=0.79). Our study demonstrates how categories of discussion on Twitter about an epidemic can be discovered, so that officials can understand specific societal concerns related to the emotions and feelings surrounding the epidemic.
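The two-stage structure (relevancy filter, then category assignment) can be mirrored with a toy pipeline. The keyword lists below are invented stand-ins for the paper's trained ML classifiers and Arabic-language features:

```python
# Toy two-stage tweet pipeline: stage 1 filters COVID-relevant tweets,
# stage 2 assigns the category with the most keyword hits.
# Keyword sets are hypothetical placeholders for trained classifiers.
RELEVANT = {"covid", "coronavirus", "quarantine"}
CATEGORIES = {
    "safety": {"mask", "stay", "home", "wash"},
    "worry":  {"afraid", "worried", "scared"},
    "irony":  {"lol", "joke", "sure"},
}

def classify(tweet):
    words = set(tweet.lower().split())
    if not words & RELEVANT:                 # stage 1: relevancy filter
        return None
    # stage 2: category with the largest keyword overlap
    return max(CATEGORIES, key=lambda c: len(words & CATEGORIES[c]))

print(classify("I am worried and scared about coronavirus"))
print(classify("Nice weather today"))
```

The real system replaces both stages with statistical classifiers evaluated at F=0.85 and F=0.79 respectively; the sketch only shows the cascading control flow.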
Managing Vertical Memory Elasticity in Containers
Pub Date: 2020-12-01 | DOI: 10.1109/UCC48980.2020.00032
Carlos H.Z. Nicodemus, Cristina Boeres, Vinod E. F. Rebello
The adoption of container technology to deploy a diverse variety of applications in clusters, cloud data centers, and even cloudlets at the edge has steadily increased. Efficient resource utilization and throughput maximization are just two important objectives for service providers trying to reduce operating costs. While containers consume CPU, memory, and I/O resources elastically, orchestration frameworks must still allocate containers according to resource availability and limit the amount of resources that each can use to avoid interference. While the practice of reserving the maximum amount of required memory for the entire execution of a container is prevalent, this paper investigates the benefits of managing container memory allocations dynamically. By frequently adjusting the amount of memory reserved for each container during execution, this autonomous approach aims to increase the average number of containers that can be hosted on a server. Results show that through careful adjustments of container limits, manipulation of pages between memory and swap, and container preemption, improvements in memory utilization, cloud costs, and job throughput can be achieved without prejudicing container performance.
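A dynamic limit-adjustment policy of the kind described can be reduced to a small pure function. The headroom factor and the floor value are hypothetical, not the paper's tuned parameters, and a real controller would write the resulting limits to the container runtime's cgroup interface rather than print them:

```python
# Sketch of a dynamic memory-limit policy: periodically tighten each
# container's limit to its observed usage plus proportional headroom,
# freeing the reclaimed memory for additional containers on the host.
# Headroom (20%) and floor (64 MB) are hypothetical values.

def new_limit(usage_mb, headroom=0.2, floor_mb=64):
    """Return a tightened limit: current usage plus headroom, never below floor."""
    return max(floor_mb, round(usage_mb * (1 + headroom)))

usages = {"job-a": 500, "job-b": 120, "job-c": 30}  # observed usage in MB
limits = {name: new_limit(u) for name, u in usages.items()}
print(limits)
```

Compared with the prevalent practice of reserving each container's maximum requirement for its whole lifetime, this frees 500 MB or more in the toy example above while still leaving each job room to grow.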
Message from the CIFS 2020 Workshop Chairs
Pub Date: 2020-12-01 | DOI: 10.1109/ucc48980.2020.00012
Sensors and actuators are becoming pervasive. From empowering smart agri-tech, to cities, to even our own households, sensors and IoT are revolutionizing all dimensions of computing. With advancements in low-energy communication standards and low-energy computing, from message encodings to on-the-fly encryption, we are seeing the emergence of new paradigms such as Fog, Serverless, and Continuum computing, empowered by high-capacity core networks and large data centers. This scheme of things creates new opportunities but is also rife with challenges that must be overcome. This workshop aims to discuss recent advances around holistic security, deployment modes, communication mediums, line protocols, data collection, and multi-level processing and application development in such systems.
PACCP: A Price-Aware Congestion Control Protocol for Datacenters
Pub Date: 2020-12-01 | DOI: 10.1109/UCC48980.2020.00022
Xiaocui Sun, Zhijun Wang, Yunxiang Wu, Hao Che, Hong Jiang
To date, customers using infrastructure-as-a-service (IaaS) clouds are charged for their usage of computing/storage resources, but not for network resources. The difficulty lies in the fact that it is nontrivial to allocate network resources to individual customers effectively, especially for short-lived flows, in terms of both performance and cost. To tackle this challenge, in this paper we propose PACCP, an end-to-end Price-Aware Congestion Control Protocol for cloud services. PACCP is a network utility maximization (NUM) based optimal congestion control protocol. It supports three classes of service (CoSes): best-effort service (BE), differentiated service (DS), and minimum-rate-guaranteed (MRG) service. In PACCP, the desired CoS or rate allocation for a given flow is enabled by properly setting a pair of control parameters, i.e., a minimum guaranteed rate and a utility weight, which in turn determine the price paid by the user of the flow. Two pricing models are proposed: a coarse-grained Virtual Machine (VM)-Based Pricing model (VBP) and a fine-grained Flow-Based Pricing model (FBP). PACCP is evaluated by both large-scale simulation and a small testbed implementation. The results demonstrate that PACCP provides minimum rate guarantees, high bandwidth utilization, and fair rate allocation, commensurate with the pricing models.
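How the (minimum rate, utility weight) pair shapes an allocation can be illustrated with a static sketch. This is not PACCP itself, which allocates rates distributedly via congestion control; the sketch just computes the equilibrium-style split on one link, with hypothetical flows and capacity:

```python
# Simplified weight-based rate allocation with minimum-rate guarantees:
# each flow first receives its guaranteed minimum, then the residual link
# capacity is divided in proportion to the flows' utility weights.
# Flow parameters and capacity are hypothetical.

def allocate(capacity, flows):
    """flows: name -> (min_rate, weight). Assumes capacity >= sum of minimums."""
    residual = capacity - sum(m for m, _ in flows.values())
    total_w = sum(w for _, w in flows.values())
    return {name: m + residual * w / total_w
            for name, (m, w) in flows.items()}

# BE flow (no guarantee, weight 1), DS flow (no guarantee, weight 2),
# MRG flow (3 Gbps guaranteed, weight 1) sharing a 10 Gbps link.
flows = {"BE": (0.0, 1.0), "DS": (0.0, 2.0), "MRG": (3.0, 1.0)}
print(allocate(10.0, flows))
```

Raising a flow's weight or minimum rate buys it a larger share, which is exactly the knob the two pricing models (VBP and FBP) attach a price to.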