BonFIRE: The Clouds and Services Testbed
Pub Date: 2013-12-02 | DOI: 10.1109/CloudCom.2013.156
K. Kavoussanakis, Alastair C. Hume, Josep Martrat, C. Ragusa, M. Gienger, K. Campowsky, Gregory van Seghbroeck, Constantino Vázquez, Celia Velayos, Frederic Gittler, P. Inglesant, G. Carella, Vegard Engen, Michal Giertych, G. Landi, D. Margery
BonFIRE is a multi-site test bed that supports testing of Cloud-based and distributed applications. BonFIRE breaks the mould of commercial Cloud offerings by providing unique functionality in terms of observability, control, advanced Cloud features and ease of use for experimentation. A number of successful use cases have been executed on BonFIRE, involving industrial and academic users and delivering impact in diverse areas such as media, e-health, environment and manufacturing. The BonFIRE user base is expanding through its free Open Access scheme, with users carrying out important research daily, while the consortium works to sustain the facility beyond 2014.
{"title":"BonFIRE: The Clouds and Services Testbed","authors":"K. Kavoussanakis, Alastair C. Hume, Josep Martrat, C. Ragusa, M. Gienger, K. Campowsky, Gregory van Seghbroeck, Constantino Vázquez, Celia Velayos, Frederic Gittler, P. Inglesant, G. Carella, Vegard Engen, Michal Giertych, G. Landi, D. Margery","doi":"10.1109/CloudCom.2013.156","DOIUrl":"https://doi.org/10.1109/CloudCom.2013.156","url":null,"abstract":"BonFIRE is a multi-site test bed that supports testing of Cloud-based and distributed applications. BonFIRE breaks the mould of commercial Cloud offerings by providing unique functionality in terms of observability, control, advanced Cloud features and ease of use for experimentation. A number of successful use cases have been executed on BonFIRE, involving industrial and academic users and delivering impact in diverse areas, such as media, e-health, environment and manufacturing. The BonFIRE user-base is expanding through its free, Open Access scheme, daily carrying out important research, while the consortium is working to sustain the facility beyond 2014.","PeriodicalId":198053,"journal":{"name":"2013 IEEE 5th International Conference on Cloud Computing Technology and Science","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130682690","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
How to Govern the Cloud? Characterizing the Optimal Enforcement Institution that Supports Accountability in Cloud Computing
Pub Date: 2013-12-02 | DOI: 10.1109/CloudCom.2013.100
J. Prüfer
This paper applies economic governance theory to the cloud computing industry. We analyze which governance institution may be best suited to solve the problems stemming from asymmetric information about the true level of data protection, security, and accountability offered by cloud service providers. We conclude that certification agencies - private, independent organizations which award certificates to cloud service providers meeting certain technical and organizational criteria - are the optimal institution available. Those users with high valuation for accountability will be willing to pay more for the services of certified providers, whereas other users may patronize uncertified providers.
{"title":"How to Govern the Cloud? Characterizing the Optimal Enforcement Institution that Supports Accountability in Cloud Computing","authors":"J. Prüfer","doi":"10.1109/CloudCom.2013.100","DOIUrl":"https://doi.org/10.1109/CloudCom.2013.100","url":null,"abstract":"This paper applies economic governance theory to the cloud computing industry. We analyze which governance institution may be best suited to solve the problems stemming from asymmetric information about the true level of data protection, security, and accountability offered by cloud service providers. We conclude that certification agencies - private, independent organizations which award certificates to cloud service providers meeting certain technical and organizational criteria - are the optimal institution available. Those users with high valuation for accountability will be willing to pay more for the services of certified providers, whereas other users may patronize uncertified providers.","PeriodicalId":198053,"journal":{"name":"2013 IEEE 5th International Conference on Cloud Computing Technology and Science","volume":"755 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117007202","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Towards A Generic Requirements Model for Hybrid and Cloud-based e-Learning Systems
Pub Date: 2013-12-02 | DOI: 10.1109/CloudCom.2013.169
R. Hammad, M. Odeh, Z. Khan
The e-Learning domain is evolving rapidly due to a number of factors, two of which are key: i) the availability of new ICT tools and technologies such as cloud computing, ontologies and smart phones, and ii) the application of various learning theories and the development of new learning models. The latter is anticipated to generate new sets of requirements for the development of e-Learning systems for the cloud environment. This paper is an attempt towards developing a generic requirements model for hybrid cloud-based e-Learning systems, with particular reference to e-Learning systems' requirements in general, pedagogical requirements, technical requirements including non-functional requirements, and the mapping of these requirements to cloud-based e-Learning environments.
{"title":"Towards A Generic Requirements Model for Hybrid and Cloud-based e-Learning Systems","authors":"R. Hammad, M. Odeh, Z. Khan","doi":"10.1109/CloudCom.2013.169","DOIUrl":"https://doi.org/10.1109/CloudCom.2013.169","url":null,"abstract":"The e-Learning domain is evolving rapidly due to a number of factors and amongst these are the two key factors: i) availability of new ICT tools and technologies such as cloud computing, ontologies and smart phones, and ii) application of various learning theories and the development of new learning models. The latter is anticipated to generate new sets of requirements for the development of new e-Learning for the cloud environment. This paper is an attempt towards developing a generic requirements model for hybrid cloud-based e-Learning systems with particular reference to e-learning systems' requirements in general, pedagogical requirements, technical requirements including non-functional requirements, and the mapping of these requirements to cloud-based e-learning environments.","PeriodicalId":198053,"journal":{"name":"2013 IEEE 5th International Conference on Cloud Computing Technology and Science","volume":"68 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131895443","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Modeling the Performance of MapReduce under Resource Contentions and Task Failures
Pub Date: 2013-12-02 | DOI: 10.1109/CloudCom.2013.28
Xiaolong Cui, Xuelian Lin, Chunming Hu, Richong Zhang, Chengzhang Wang
MapReduce is a widely used programming model for large-scale data processing. In order to estimate the performance of a MapReduce job and analyze its bottlenecks, a practical performance model for MapReduce is needed. Much work has been done on modeling the performance of MapReduce jobs. However, existing performance models ignore important factors, such as I/O congestion and task failures across the cluster, which may significantly change the execution cost of a MapReduce job. This paper, aiming to predict the execution time of a MapReduce job, presents an enhanced performance model that takes resource contention and task failures into consideration. The experimental results show that the model is more accurate than models that ignore contention and failure factors.
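As a concrete illustration of why contention and failures matter for such estimates, here is a minimal Python sketch, not the paper's model: an idealized per-phase estimate is inflated by an assumed contention factor and by expected re-executions of failed tasks. All names, parameters and the scaling form are invented for illustration.

```python
# Illustrative sketch (not the paper's model): a rough MapReduce job-time
# estimate where an ideal per-phase cost is inflated by I/O contention and
# by reruns of failed tasks. All parameter names and scalings are assumptions.

def estimate_job_time(map_tasks, reduce_tasks, map_time, reduce_time,
                      map_slots, reduce_slots,
                      contention_factor=1.0, failure_rate=0.0):
    """Return a rough job completion time in seconds."""
    # Expected number of task executions, counting reruns of failed attempts.
    effective_maps = map_tasks / (1.0 - failure_rate)
    effective_reduces = reduce_tasks / (1.0 - failure_rate)

    # Waves of tasks per phase, each slowed down by shared-I/O contention.
    map_phase = (effective_maps / map_slots) * map_time * contention_factor
    reduce_phase = (effective_reduces / reduce_slots) * reduce_time * contention_factor
    return map_phase + reduce_phase


if __name__ == "__main__":
    ideal = estimate_job_time(400, 40, 30.0, 90.0, 100, 20)
    stressed = estimate_job_time(400, 40, 30.0, 90.0, 100, 20,
                                 contention_factor=1.4, failure_rate=0.05)
    print(f"ideal ~{ideal:.0f}s, with contention/failures ~{stressed:.0f}s")
```

Even this toy shows how a model that ignores contention and failures would systematically underestimate completion time on a loaded cluster.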
{"title":"Modeling the Performance of MapReduce under Resource Contentions and Task Failures","authors":"Xiaolong Cui, Xuelian Lin, Chunming Hu, Richong Zhang, Chengzhang Wang","doi":"10.1109/CloudCom.2013.28","DOIUrl":"https://doi.org/10.1109/CloudCom.2013.28","url":null,"abstract":"MapReduce is a widely used programming model for large scale data processing. In order to estimate the performance of MapReduce job and analyze the bottleneck of MapReduce job, a practical performance model for MapReduce is needed. Many works have been done on modeling the performance of MapReduce jobs. However, existing performance models ignore some important factors, such as I/O congestions and task failures over cluster, which may significantly change the execution costs of MapReduce job. This paper, aiming at predicting the execution time of a MapReduce job, presents an enhanced performance model that takes the resource contention and task failures into consideration. In addition, the experimental results show that the model is more accurate than those without considering the contention and failure factors.","PeriodicalId":198053,"journal":{"name":"2013 IEEE 5th International Conference on Cloud Computing Technology and Science","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132050807","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Cognitive Platform for Mobile Cloud Gaming
Pub Date: 2013-12-02 | DOI: 10.1109/CloudCom.2013.17
Wei Cai, Conghui Zhou, Victor C. M. Leung, Min Chen
Mobile cloud gaming provides a whole new service model for the video game industry to overcome the intrinsic restrictions of mobile devices and piracy issues. However, the diversity of end-user devices and frequent changes in network quality of service and cloud responses result in unstable Quality of Experience (QoE) for game players. A cognitive cloud gaming platform, which could overcome this problem by learning about the game player's environment and adapting the cloud gaming service accordingly, does not currently exist. To fill this void, we design and implement a component-based gaming platform that supports click-and-play, intelligent resource allocation and partial offline execution, providing cognitive capabilities across the cloud gaming system. Extensive experiments show that intelligent partitioning leads to better system performance, for example in overall latency.
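For illustration only, the following Python sketch shows the kind of per-component offloading decision such a cognitive platform might make, trading local execution time against cloud execution plus a network round trip. The component names, timings and thresholds are hypothetical and are not taken from the paper.

```python
# Hypothetical per-component placement decision for a cloud gaming client.
# Costs, thresholds and component names are invented for the example.

def place_component(local_ms, cloud_ms, rtt_ms, battery_level):
    """Return 'device' or 'cloud' for one game component."""
    # Offloading pays off when cloud execution plus the round trip beats local
    # execution, or when the device must conserve battery and the cloud is not
    # drastically slower.
    cloud_total = cloud_ms + rtt_ms
    if battery_level < 0.2 and cloud_total < 2 * local_ms:
        return "cloud"
    return "cloud" if cloud_total < local_ms else "device"


components = {
    "rendering": (45.0, 8.0),      # (local_ms, cloud_ms), made-up numbers
    "game_logic": (5.0, 1.0),
    "ai_pathfinding": (30.0, 4.0),
}
rtt_ms, battery = 20.0, 0.8
plan = {name: place_component(l, c, rtt_ms, battery)
        for name, (l, c) in components.items()}
print(plan)  # {'rendering': 'cloud', 'game_logic': 'device', 'ai_pathfinding': 'cloud'}
```

A cognitive platform would additionally keep re-learning these costs as the network and device conditions change.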
{"title":"A Cognitive Platform for Mobile Cloud Gaming","authors":"Wei Cai, Conghui Zhou, Victor C. M. Leung, Min Chen","doi":"10.1109/CloudCom.2013.17","DOIUrl":"https://doi.org/10.1109/CloudCom.2013.17","url":null,"abstract":"Mobile cloud gaming provides a whole new service model for the video game industry to overcome the intrinsic restrictions of mobile devices and piracy issues. However, the diversity of end-user devices and frequent changes in network quality of service and cloud responses result in unstable Quality of Experience (QoE) for game players. A cognitive cloud gaming platform, which could overcome the above problem by learning about the game player's environment and adapting the cloud gaming service accordingly, does not currently exist. To fill this void, we design and implement a component-based gaming platform that supports click-and-play, intelligent resource allocation and partial offline execution, to provide cognitive capabilities across the cloud gaming system. Extensive experiments have been performed to show that intelligent partitioning leads to better system performance, such as overall latency.","PeriodicalId":198053,"journal":{"name":"2013 IEEE 5th International Conference on Cloud Computing Technology and Science","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132082423","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Towards an Operating System for Intercloud
Pub Date: 2013-12-02 | DOI: 10.1109/CLOUDCOM.2013.105
R. Strijkers, R. Cushing, M. Makkes, P. Meulenhoff, A. Belloum, C. D. Laat, R. Meijer
Cyber physical systems, such as intelligent dikes and smart energy systems, require scalable and flexible computing infrastructures to process data from instruments and sensor networks. Infrastructure as a Service clouds provide a flexible way to allocate remote distributed resources, but lack mechanisms to dynamically configure software (dependencies) and manage application execution. This paper describes the design and implementation of the Intercloud Operating System (ICOS), which acts between applications and distributed clouds, i.e., the Intercloud. ICOS schedules, configures, and executes applications in the Intercloud while taking data dependencies, budgets, and deadlines into account. Based on our experiences with the prototype, we present considerations and additional research challenges. The research on ICOS clarifies essential concepts needed to realize a flexible and scalable on-demand execution platform for distributed applications over distributed cloud providers.
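As a toy illustration of one scheduling decision such a layer faces, the sketch below (Python, assumptions only, not the ICOS implementation) picks the cheapest cloud offer that satisfies a task's budget and deadline; the offer data and selection rule are invented.

```python
# Hypothetical Intercloud scheduling decision: choose a provider offer for a
# task under a budget and a deadline. Not ICOS code; all data is made up.

from dataclasses import dataclass

@dataclass
class Offer:
    provider: str
    est_runtime_h: float   # estimated runtime on this offer, hours
    price_per_h: float     # price, currency units per hour

def pick_offer(offers, budget, deadline_h):
    """Cheapest offer that fits both the budget and the deadline, else None."""
    feasible = [o for o in offers
                if o.est_runtime_h <= deadline_h
                and o.est_runtime_h * o.price_per_h <= budget]
    return min(feasible, key=lambda o: o.est_runtime_h * o.price_per_h, default=None)

offers = [Offer("cloud-a", 4.0, 1.20), Offer("cloud-b", 2.5, 2.50), Offer("cloud-c", 6.0, 0.60)]
best = pick_offer(offers, budget=5.0, deadline_h=5.0)
print(best)  # Offer(provider='cloud-a', est_runtime_h=4.0, price_per_h=1.2) -> cost 4.8
```

A real Intercloud scheduler would layer data dependencies and software configuration on top of this kind of cost/deadline reasoning.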
{"title":"Towards an Operating System for Intercloud","authors":"R. Strijkers, R. Cushing, M. Makkes, P. Meulenhoff, A. Belloum, C. D. Laat, R. Meijer","doi":"10.1109/CLOUDCOM.2013.105","DOIUrl":"https://doi.org/10.1109/CLOUDCOM.2013.105","url":null,"abstract":"Cyber physical systems, such as intelligent dikes and smart energy systems, require scalable and flexible computing infrastructures to process data from instruments and sensor networks. Infrastructure as a Service clouds provide a flexible way to allocate remote distributed resources, but lack mechanisms to dynamically configure software (dependencies) and manage application execution. This paper describes the design and implementation of the Intercloud Operating System (ICOS), which acts between applications and distributed clouds, i.e., the Intercloud. ICOS schedules, configures, and executes applications in the Intercloud while taking data dependencies, budgets, and deadlines into account. Based on our experiences with the prototype, we present considerations and additional research challenges. The research on ICOS clarifies essential concepts needed to realize a flexible and scalable on-demand execution platform for distributed applications over distributed cloud providers.","PeriodicalId":198053,"journal":{"name":"2013 IEEE 5th International Conference on Cloud Computing Technology and Science","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128992723","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Framework for Self-Healing and Self-Adaptation of Cloud-Hosted Web-Based Applications
Pub Date: 2013-12-02 | DOI: 10.1109/CloudCom.2013.80
J. Magalhães, L. Silva
The adaptation of a cloud infrastructure is an ongoing process. Cloud adaptation aims to provide the cloud infrastructure with the necessary computational resources to meet the agreed SLAs and, simultaneously, to optimize resource usage. In a cloud, consumers are typically limited to the SLAs defined in advance with the cloud service provider. This creates a strong dependence on the cloud provider and leaves little room for maneuver when cloud customers need to adapt the infrastructure very quickly to avoid service degradation. In this paper we present a framework that aims to reduce this gap. The SHõWA framework targets self-healing Web-based applications. It detects workload and performance anomalies from the consumer perspective and interacts with the cloud service provider to dynamically adjust the infrastructure. The experimental study conducted highlights the role of SHõWA in avoiding service degradation when load and resource contention scenarios occur.
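The following Python sketch illustrates, under stated assumptions, the shape of such a consumer-side adaptation loop: detect a response-time anomaly and ask the provider to scale out. The threshold rule and the provider API are placeholders, not SHõWA's actual mechanisms.

```python
# Minimal consumer-side adaptation loop (illustrative only, not SHõWA):
# flag a response-time anomaly and request a scale-out from the provider.

import statistics

def detect_anomaly(recent_ms, baseline_ms, factor=1.5):
    """Flag an anomaly when the recent median response time drifts well above baseline."""
    return statistics.median(recent_ms) > factor * baseline_ms

class FakeProvider:
    """Stand-in for a provider API; a real system would call the provider's interface."""
    def __init__(self, vms=2):
        self.vms = vms
    def scale_to(self, vms):
        print(f"provider: scaling from {self.vms} to {vms} VMs")
        self.vms = vms

def adaptation_step(provider, recent_ms, baseline_ms):
    if detect_anomaly(recent_ms, baseline_ms):
        provider.scale_to(provider.vms + 1)   # coarse scale-out policy for the sketch

provider = FakeProvider()
adaptation_step(provider, recent_ms=[180, 220, 260, 240], baseline_ms=120)
```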
{"title":"A Framework for Self-Healing and Self-Adaptation of Cloud-Hosted Web-Based Applications","authors":"J. Magalhães, L. Silva","doi":"10.1109/CloudCom.2013.80","DOIUrl":"https://doi.org/10.1109/CloudCom.2013.80","url":null,"abstract":"The adaptation of a cloud infrastructure is an ongoing process. Cloud adaptation aims to provide the cloud infrastructure with the necessary computational resources to meet the agreed SLAs and, simultaneously, optimize the resources usage. In a cloud, the consumers are typically limited to the SLAs defined in advance with the cloud service provider. This creates a strong dependence in the cloud provider, and gives little room for maneuver when the cloud customers need to adapt the infrastructure very quickly to avoid service degradations. In this paper we present a framework that aims to reduce this gap. The SHõWA framework is targeted for self-healing Web-based applications. It detects workload and performance anomalies from the consumer perspective and interacts with the cloud service provider to dynamically adjust the infrastructure. From the experimental study conducted, is noteworthy the role of SHõWA to avoid the degradation of service upon the occurrence of load and resource contention scenarios.","PeriodicalId":198053,"journal":{"name":"2013 IEEE 5th International Conference on Cloud Computing Technology and Science","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115918646","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Time-Aware VM-Placement and Routing with Bandwidth Guarantees in Green Cloud Data Centers
Pub Date: 2013-12-02 | DOI: 10.1109/CloudCom.2013.36
Aissan Dalvandi, G. Mohan, K. Chua
Variation in network performance due to shared resources is a key obstacle to cloud adoption. Thus, the success of cloud providers in attracting more tenants depends on their ability to provide bandwidth guarantees. Power efficiency in data centers has also become critically important for supporting a larger number of tenants. In this paper, we address the problem of time-aware VM-placement and routing (TVPR), where each tenant requests a specified amount of server resources (VMs) and network resources (bandwidth) for a given duration. The TVPR problem allocates the required resources for as many tenants as possible by finding the right set of servers to map their VMs and routing their traffic so as to minimize the total power consumption. We propose a multi-component, utilization-based power model to determine the total power consumption of a data center according to the resource utilization of its components (servers and switches). We then develop a mixed integer linear programming (MILP) formulation based on the proposed power model and prove the problem to be NP-complete. Since the TVPR problem is computationally prohibitive, we develop a fast and scalable heuristic algorithm. To demonstrate the efficiency of our proposed algorithm, we compare its performance with the numerical results obtained by solving the MILP problem using CPLEX for a small data center. We then demonstrate the effectiveness of the proposed algorithm in terms of power consumption and acceptance ratio for large data centers through simulation results.
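As a simplified illustration of the consolidation intuition behind power-aware placement, the Python sketch below uses first-fit-decreasing bin packing to place VMs on as few servers as possible. It deliberately omits bandwidth routing, request durations and the switch power model that the paper's MILP and heuristic handle; all capacities and demands are made up.

```python
# Toy consolidation sketch (not the paper's TVPR heuristic): first-fit-decreasing
# placement of VMs onto servers, minimizing the number of powered-on servers.

def place_vms(vm_demands, server_capacity):
    """Return a list of servers, each a list of (vm_id, demand), or None if a VM cannot fit."""
    servers = []
    for vm_id, demand in sorted(vm_demands.items(), key=lambda kv: -kv[1]):
        if demand > server_capacity:
            return None
        for srv in servers:                      # first fit: reuse an already powered-on server
            if sum(d for _, d in srv) + demand <= server_capacity:
                srv.append((vm_id, demand))
                break
        else:                                    # otherwise power on a new server
            servers.append([(vm_id, demand)])
    return servers

demands = {"vm1": 4, "vm2": 3, "vm3": 2, "vm4": 2, "vm5": 1}   # e.g. vCPU counts
print(place_vms(demands, server_capacity=6))
# [[('vm1', 4), ('vm3', 2)], [('vm2', 3), ('vm4', 2), ('vm5', 1)]] -> 2 servers powered on
```

The full problem is harder because the routing of each tenant's traffic and the time window of each request also affect which servers and switches can be powered down.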
{"title":"Time-Aware VM-Placement and Routing with Bandwidth Guarantees in Green Cloud Data Centers","authors":"Aissan Dalvandi, G. Mohan, K. Chua","doi":"10.1109/CloudCom.2013.36","DOIUrl":"https://doi.org/10.1109/CloudCom.2013.36","url":null,"abstract":"Variation in network performance due to the shared resources is a key obstacle for cloud adoption. Thus, the success of cloud providers to attract more tenants depends on their ability to provide bandwidth guarantees. Power efficiency in data centers has become critically important for supporting larger number of tenants. In this paper, we address the problem of time-aware VM-placement and routing (TVPR), where each tenant requests for a specified amount of server resources (VMs) and network resource (bandwidth) for a given duration. The TVPR problem allocates the required resources for as many tenants as possible by finding the right set of servers to map their VMs and routing their traffic so as to minimize the total power consumption. We propose a multi-component utilization-based power model to determine the total power consumption of a data center according to the resource utilization of the components (servers and switches). We then develop a mixed integer linear programming (MILP) optimization problem formulation based on the proposed power model and prove it to be N P-complete. Since the TVPR problem is computationally prohibitive, we develop a fast and scalable heuristic algorithm. To demonstrate the efficiency of our proposed algorithm, we compare its performance with the numerical results obtained by solving the MILP problem using CPLEX, for a small data center. We then demonstrate the effectiveness of the proposed algorithm in terms of power consumption and acceptance ratio for large data centers through simulation results.","PeriodicalId":198053,"journal":{"name":"2013 IEEE 5th International Conference on Cloud Computing Technology and Science","volume":"188 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116345927","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Towards Data Handling Requirements-Aware Cloud Computing
Pub Date: 2013-12-02 | DOI: 10.1109/CloudCom.2013.145
Martin Henze, Marcel Grossfengels, Maik Koprowski, Klaus Wehrle
The adoption of the cloud computing paradigm is hindered by severe security and privacy concerns which arise when outsourcing sensitive data to the cloud. One important group of concerns regards the handling of data. On the one hand, users and companies have requirements on how their data should be treated. On the other hand, lawmakers impose requirements and obligations for specific types of data. These requirements have to be addressed in order to enable the affected users and companies to utilize cloud computing. However, we observe that current cloud offerings, especially in an intercloud setting, fail to meet these requirements. Users have no way to specify their requirements for data handling in the cloud, and providers in the cloud stack - even if they were willing to meet these requirements - thus cannot treat the data adequately. In this paper, we identify and discuss the challenges of enabling data handling requirements awareness in the (inter-)cloud. To this end, we show how to extend a data storage service, AppScale, and Cassandra to follow data handling requirements. Thus, we make an important step towards data handling requirements-aware cloud computing.
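As an illustration of what "requirements-aware" storage could look like, the Python sketch below attaches handling requirements to a data object and filters storage nodes against them. The requirement fields and node metadata are invented and do not reflect the paper's AppScale/Cassandra extension.

```python
# Hypothetical illustration of data handling requirements: annotate data with
# constraints and select only storage nodes that satisfy them.

from dataclasses import dataclass, field

@dataclass
class HandlingRequirements:
    allowed_countries: set = field(default_factory=set)   # where the data may be stored
    encryption_at_rest: bool = False

@dataclass
class StorageNode:
    name: str
    country: str
    encrypts_at_rest: bool

def compliant_nodes(nodes, req):
    """Return the storage nodes that satisfy the object's handling requirements."""
    return [n for n in nodes
            if (not req.allowed_countries or n.country in req.allowed_countries)
            and (not req.encryption_at_rest or n.encrypts_at_rest)]

nodes = [StorageNode("node-de", "DE", True),
         StorageNode("node-us", "US", True),
         StorageNode("node-fr", "FR", False)]
req = HandlingRequirements(allowed_countries={"DE", "FR"}, encryption_at_rest=True)
print([n.name for n in compliant_nodes(nodes, req)])   # ['node-de']
```

The challenge discussed in the paper is making such annotations flow through every layer of the cloud stack, so that each provider can actually enforce them.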
{"title":"Towards Data Handling Requirements-Aware Cloud Computing","authors":"Martin Henze, Marcel Grossfengels, Maik Koprowski, Klaus Wehrle","doi":"10.1109/CloudCom.2013.145","DOIUrl":"https://doi.org/10.1109/CloudCom.2013.145","url":null,"abstract":"The adoption of the cloud computing paradigm is hindered by severe security and privacy concerns which arise when outsourcing sensitive data to the cloud. One important group are those concerns regarding the handling of data. On the one hand, users and companies have requirements how their data should be treated. On the other hand, lawmakers impose requirements and obligations for specific types of data. These requirements have to be addressed in order to enable the affected users and companies to utilize cloud computing. However, we observe that current cloud offers, especially in an intercloud setting, fail to meet these requirements. Users have no way to specify their requirements for data handling in the cloud and providers in the cloud stack - even if they were willing to meet these requirements - can thus not treat the data adequately. In this paper, we identify and discuss the challenges for enabling data handling requirements awareness in the (inter-)cloud. To this end, we show how to extend a data storage service, AppScale, and Cassandra to follow data handling requirements. Thus, we make an important step towards data handling requirements-aware cloud computing.","PeriodicalId":198053,"journal":{"name":"2013 IEEE 5th International Conference on Cloud Computing Technology and Science","volume":"64 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114497241","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Flow-and-VM Migration for Optimizing Throughput and Energy in SDN-Based Cloud Datacenter
Pub Date: 2013-12-02 | DOI: 10.1109/CloudCom.2013.35
Wei-Chu Lin, Chien-Hui Liao, Kuan-Tsen Kuo, Charles H.-P. Wen
Minimizing energy consumption and improving performance in data centers are critical to cost savings for cloud operators, but traditionally these two optimization objectives are treated separately. This paper therefore presents a unified solution combining two strategies, flow migration and VM migration, to simultaneously maximize throughput and minimize energy. Traffic-aware flow migration (FM) is first incorporated into dynamic rerouting (DENDIST), evolving into DENDIST-FM, in a software-defined network (SDN) for improving throughput and avoiding congestion. Second, given energy and topology information, VM migration (ETA-VMM) helps reduce traffic loads while saving energy. Our experimental results indicate that, compared to previous work, the proposed method improves throughput by 42.5% on average with only 2.2% energy overhead. Accordingly, the unified flow-and-VM migration solution proves effective for optimizing throughput and energy in SDN-based cloud data centers.
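For intuition only, the Python sketch below shows a simple flow-reroute choice of the kind an SDN controller might make: among candidate paths, pick the one whose bottleneck link has the most headroom. This is not DENDIST-FM or ETA-VMM; the topology, loads and capacities are made up.

```python
# Toy flow-reroute selection (illustrative only): choose the candidate path
# with the largest bottleneck headroom that can still absorb the flow.

def residual(path, link_load, capacity):
    """Bottleneck headroom of a path: smallest remaining capacity over its links."""
    return min(capacity - link_load[link] for link in path)

def pick_reroute(paths, link_load, capacity, flow_rate):
    """Return the best candidate path that can still absorb the flow, else None."""
    feasible = [p for p in paths if residual(p, link_load, capacity) >= flow_rate]
    return max(feasible, key=lambda p: residual(p, link_load, capacity), default=None)

link_load = {"s1-a1": 8.0, "a1-c1": 9.5, "a1-c2": 4.0,
             "c1-a2": 3.0, "c2-a2": 5.0, "a2-s2": 2.0}
paths = [("s1-a1", "a1-c1", "c1-a2", "a2-s2"),
         ("s1-a1", "a1-c2", "c2-a2", "a2-s2")]
print(pick_reroute(paths, link_load, capacity=10.0, flow_rate=1.0))
# ('s1-a1', 'a1-c2', 'c2-a2', 'a2-s2')  -- avoids the nearly saturated a1-c1 link
```

Combining such rerouting with VM migration lets the operator both relieve hot links and empty out under-utilized servers and switches so they can be powered down.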
{"title":"Flow-and-VM Migration for Optimizing Throughput and Energy in SDN-Based Cloud Datacenter","authors":"Wei-Chu Lin, Chien-Hui Liao, Kuan-Tsen Kuo, Charles H.-P. Wen","doi":"10.1109/CloudCom.2013.35","DOIUrl":"https://doi.org/10.1109/CloudCom.2013.35","url":null,"abstract":"Minimizing energy consumption and improving performance in data centers are critical to cost-saving for cloud operators, but traditionally, these two optimization objectives are treated separately. Therefore, this paper presents an unified solution combining two strategies, flow migration and VM migration, to maximize throughput and minimize energy, simultaneously. Traffic-aware flow migration (FM) is first incorporated in dynamic reroute (DENDIST), evolving into DENDIST-FM, in a software-defined network (SDN) for improving throughput and avoiding congestion. Second, given energy and topology information, VM migration (ETA-VMM) can help reduce traffic loads and meanwhile save energy. Our experimental result indicates that compared to previous works, the proposed method can improve throughput by 42.5% on average with only 2.2% energy overhead. Accordingly, the unified flow-and-VM migration solution has been proven effective for optimizing throughput and energy in SDN-based cloud data centers.","PeriodicalId":198053,"journal":{"name":"2013 IEEE 5th International Conference on Cloud Computing Technology and Science","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114648312","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}