"SULTAN: A Composite Data Consistency Approach for SaaS Multi-cloud Deployment," by Islam Elgedawy. DOI: 10.1109/UCC.2015.28. In: 2015 IEEE/ACM 8th International Conference on Utility and Cloud Computing (UCC).
Migrating business services to the cloud creates high business risks such as "cloud vendor lock-in". One approach to mitigating this risk is to deploy business services on different clouds as SaaS (i.e., Software as a Service) services. Unfortunately, such a SaaS multi-cloud deployment approach faces many technical obstacles, such as cloud heterogeneity and ensuring data consistency across different clouds. Cloud heterogeneity can be resolved easily using service adapters, but ensuring data consistency remains a major obstacle, as existing approaches offer a trade-off between correctness and performance. Hence, SaaS providers opt to choose one or more of these approaches at design time and then create their services around the limitations of the chosen approaches. This limits the agility and evolution of business services, as it tightly couples them to the chosen data consistency approaches. To overcome this problem, this paper proposes SULTAN, a composite data consistency approach for SaaS multi-cloud deployment. It enables SaaS providers to dynamically define different data consistency requirements for the same SaaS service at run-time. SULTAN decouples SaaS services from the cloud data stores, enabling services to adapt and migrate freely among clouds without any SaaS code modifications.
"Using Aspect-Based Sentiment Analysis to Evaluate Arabic News Affect on Readers," by Mohammad Al-Smadi, M. Al-Ayyoub, Huda Al-Sarhan, Y. Jararweh. DOI: 10.1109/UCC.2015.78. In: 2015 IEEE/ACM 8th International Conference on Utility and Cloud Computing (UCC).
The rapid increase in digital information has raised great challenges, especially when it comes to automated content analysis. The adoption of social media as a communication channel for political views demands automated methods for analyzing posts' tone, sentiment, and emotional affect. This paper proposes a novel approach that uses aspect-based sentiment analysis to evaluate the affect of Arabic news posts on readers. The approach comprises several phases of text processing, feature selection, and text classification. Two widely used classifiers, namely Conditional Random Fields (CRF) and J48, are tested. Experimental results show that J48 outperforms CRF in aspect term extraction, whereas CRF is slightly better in aspect term polarity identification.
"High Quality Media Streaming over Long-Distance Network Using FELIX Experimental Facility," by L. Ogrodowczyk, B. Belter, Szymon Malewski. DOI: 10.1109/UCC.2015.69. In: 2015 IEEE/ACM 8th International Conference on Utility and Cloud Computing (UCC).
This demo paper presents the FELIX project's approach to implementing the "High Quality Media Transmission over long-distance networks" use case. A virtual slice built on demand over European and Japanese infrastructure allows the experiments to be performed and demonstrates the capabilities of the test-bed and its availability for high-quality media streaming experiments over a long-distance federated network. It is also the first time that the FELIX Control Framework has been used to provision the SDN resources for the experiments. Two experiments for implementing and validating the use case in the FELIX test-bed are proposed and described.
"Experimental Evaluation of the Cloud-Native Application Design," by Sandro Brunner, Martin Blöchlinger, G. T. Carughi, Josef Spillner, T. Bohnert. DOI: 10.1109/UCC.2015.87. In: 2015 IEEE/ACM 8th International Conference on Utility and Cloud Computing (UCC).
Cloud-Native Applications (CNA) are designed to run on top of cloud computing infrastructure services with inherent support for self-management, scalability and resilience across clustered units of application logic. Their systematic design is promising, especially for recent hybrid virtual machine and container environments for which no dominant application development model exists. In this paper, we present a case study on a business application running as a CNA and demonstrate the advantages of the design experimentally. We also present Dynamite, an application auto-scaler designed for containerised CNA. Our experiments on a Vagrant host, on a private OpenStack installation and on a public Amazon EC2 testbed show that CNA require little additional engineering.
"TE-Cast: Supporting General Broadcast/Multicast Communications in Virtual Networks," by Keisuke Matsuo, Ryota Kawashima, H. Matsuo. DOI: 10.1109/UCC.2015.83. In: 2015 IEEE/ACM 8th International Conference on Utility and Cloud Computing (UCC).
Overlay-based network virtualization has been attracting attention as a way to realize multi-tenant datacenters, but the multi-tenant property makes broadcast/multicast communications on virtual networks difficult. In this paper, we propose TE-Cast (Topology Embedded xCast), which supports generic broadcast/multicast communications on virtual networks. Unlike existing similar methods, TE-Cast can reduce the traffic volume in substrate networks without IP multicast support and realizes stateless delivery of BUM (broadcast, unknown unicast, and multicast) packets among virtual switches. In practice, OpenFlow-enabled virtual switches are logically grouped in advance and a representative switch is elected from each group. OpenFlow controllers inform the switches of the network topology so that it can be embedded into encapsulated BUM packets, and virtual switches deliver BUM traffic based on this information. We evaluated the network delay of BUM packet delivery and the traffic volume of each link. The results show that the proposed method reduced packets in upstream links by up to 62% and packets in host-side links by up to 43%. Finally, we demonstrated VRRP-based failover on virtual networks.
"Smart Shuffling in MapReduce: A Solution to Balance Network Traffic and Workloads," by W. Shi, Yang Wang, J. Corriveau, Boqiang Niu, W. Croft, Mengfei Peng. DOI: 10.1109/UCC.2015.18. In: 2015 IEEE/ACM 8th International Conference on Utility and Cloud Computing (UCC).
In the context of Hadoop, recent studies show that the shuffle operation accounts for as much as a third of the completion time of a MapReduce job. Consequently, the shuffle phase constitutes a crucial aspect of the scheduling of such jobs. During a shuffle phase, the job scheduler assigns reduce tasks to a set of reduce nodes. This may require multiple intermediate data items that share a key to be relocated to this new set of reduce nodes. In turn, this can cause a large volume of simultaneous data relocations within the network. Intuitively, a reduce task experiences shorter access latency if its required items are available locally or in close proximity. This, however, may also result in a hotspot in the network due to imbalanced traffic, as well as an imbalance of the workload on different nodes, regardless of their homogeneity. In this paper, we study the data relocation incurred during the shuffle stage in the MapReduce framework. Within an arbitrary network, we aim at a) minimizing the overall network traffic, b) achieving workload balancing, and c) eliminating network hotspots, in order to improve the overall performance. Our contribution consists of the development of a scheduler that satisfies these three goals. We then present an in-depth simulation. Our results show that, for arbitrary network topologies, our Smart Shuffling Scheduler systematically outperforms the CoGRS scheduler in terms of hotspot elimination as well as reduce-task load balancing, while ensuring that the traffic caused by data relocation remains low. Not only does our algorithm handle any topology, but its benefits are also inversely proportional to the inter-node connectivity of the network topology: the lower this connectivity, the better our algorithm performs. In particular, for the tree topology commonly used within data centres, our proposed scheduler offers significant improvements over the CoGRS scheduler.
"Domain Isolation in a Multi-tenant Software-Defined Network," by Alireza Ranjbar, M. Antikainen, T. Aura. DOI: 10.1109/UCC.2015.16. In: 2015 IEEE/ACM 8th International Conference on Utility and Cloud Computing (UCC).
Software-Defined Networking (SDN) has evolved as a new networking paradigm to solve many of the current obstacles and limitations in communication networks. While initially intended mainly for single-domain networks, SDN technology is also being deployed in large cloud-based data centers where several customers, called tenants, share network resources. In a multi-tenant environment, SDN technology allows the customers to have a higher level of control over the available network resources. However, as the underlying network elements and control logic are shared between multiple tenants, the isolation between tenant domains becomes an important factor in the design of all multi-tenant solutions. In this paper, we propose a scalable system architecture based on OpenFlow and packet rewriting that provides isolation and controlled sharing between tenants while enabling them to control their assigned resources. The architecture addresses different facets of isolation in a multi-tenant network, including traffic, address space, and control isolation. Our solution improves on previous ones by putting special emphasis on inter-tenant communication, e.g., on subcontractor relations in cloud services. The evaluation of the prototype indicates that our solution imposes only a small performance overhead on forwarding in a shared network.
"Prediction of Workloads in Incident Management Based on Incident Ticket Updating History," by S. Kikuchi. DOI: 10.1109/UCC.2015.53. In: 2015 IEEE/ACM 8th International Conference on Utility and Cloud Computing (UCC).
Incident management is one of the most important and burdensome tasks in system management. In order to achieve effective incident management, predicting the workload needed to solve incidents is quite useful. Using this prediction, we can distribute incident tickets fairly among administrators. In order to predict the workload needed to handle an incident ticket when it arrives, we propose an incident ticket classification method based on text mining (TF-IDF and Naive Bayes). In this approach, we first collect incident tickets with their number of updates as workload indicators. Next, we construct a model, based on Naive Bayes, representing the relation between the words in incident texts and the incident workload category (easy or difficult). We then predict the category into which each new incident ticket should be classified using the model. We implemented our method using Hadoop and the Mahout library. By conducting an evaluation with incident tickets recorded in a cloud infrastructure for research, we confirmed that our approach can predict the workload of incident tickets with an F-measure of 0.81 in the best case.
"Enhancing Energy-Efficient Cloud Management through Code Annotations and the Green Abstraction Layer," by R. Bolla, Luigi Sambolino, Danilo Tigano, M. Repetto. DOI: 10.1109/UCC.2015.95. In: 2015 IEEE/ACM 8th International Conference on Utility and Cloud Computing (UCC).
Cloud computing has emerged as a flexible and efficient paradigm to provide IT resources on demand. However, it has also raised new challenges for infrastructure providers to manage large-scale deployments in an efficient and effective way. In this paper, we examine the trade-off between energy consumption and performance. We outline a novel framework for efficient and effective resource consolidation in data centers, building on the latest trends in software development practice and recent standards for energy efficiency. In particular, we consider the use of code annotations from software developers and the adoption of a "green abstraction layer" to model the trade-off between performance and energy consumption.
"Analysis and Evaluation of I/O Hypervisor Scheduling," by K. Kontodimas, P. Kokkinos, Yossi Kuperman, Emmanouel Varvarigos. DOI: 10.1109/UCC.2015.19. In: 2015 IEEE/ACM 8th International Conference on Utility and Cloud Computing (UCC).
A hypervisor's smooth operation and efficient performance have an immediate effect on the supported cloud services. We investigate scheduling algorithms that match I/O requests originating from virtual resources to the physical CPUs that do the actual processing. We envisage a new paradigm of virtualized resource consolidation, where the I/O resources required by several Virtual Machines (VMs) in different physical hosts are provided by one or more external, powerful, dedicated appliances, namely the I/O Hypervisor (IOH). To this end, I/O operations are transferred from the VMs to the IOH, where they are executed. We propose and evaluate a number of scheduling algorithms for this hypervisor model, concentrating on providing guaranteed fairness among the virtual resources. A simulator that describes this model has been built and is used for the implementation and evaluation of the algorithms. We also analyze the performance of the different hypervisor models and highlight the importance of fair scheduling.