Secure Cloud-Based Volume Ray-Casting
M. Mohanty, Wei Tsang Ooi, P. Atrey
Pub Date: 2013-12-02 | DOI: 10.1109/CloudCom.2013.77
Advances in cloud computing have allowed volume rendering tasks, typically done by volume ray-casting, to be outsourced to cloud data centers. The availability of volume data and rendered images (which can contain sensitive information, such as a patient's disease status) to a third-party cloud provider, however, presents security and privacy challenges. This paper addresses these challenges by proposing a secure cloud-based volume ray-casting framework that distributes the rendering tasks among the data centers and, using Shamir's secret sharing, hides the information exchanged between the server and a data center, between two data centers, and between a data center and the client, such that no data center has enough information to learn the secret data or the rendered image. Experiments and analyses show that our framework is highly secure and incurs low computation cost.
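The (k, n) threshold scheme the abstract relies on, Shamir's secret sharing, can be sketched in a few lines. This is an illustrative Python toy, not the paper's rendering pipeline; the prime, the secret value, and the parameters k and n are arbitrary choices for demonstration:

```python
import random

PRIME = 2**127 - 1  # arbitrary large prime defining the finite field

def make_shares(secret, k, n, prime=PRIME):
    """Split `secret` into n shares such that any k of them reconstruct it."""
    # Random polynomial of degree k-1 with f(0) = secret.
    coeffs = [secret] + [random.randrange(prime) for _ in range(k - 1)]
    f = lambda x: sum(c * pow(x, i, prime) for i, c in enumerate(coeffs)) % prime
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares, prime=PRIME):
    """Lagrange interpolation at x = 0 over GF(prime)."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * -xj % prime
                den = den * (xi - xj) % prime
        secret = (secret + yi * num * pow(den, -1, prime)) % prime
    return secret

shares = make_shares(secret=200, k=3, n=5)  # e.g. one secret voxel value
assert reconstruct(shares[:3]) == 200       # any 3 shares suffice
assert reconstruct(shares[2:]) == 200
```

Fewer than k shares reveal nothing about the secret, which is what allows each data center to compute on shares without ever learning the volume data or the rendered image.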
Adapting Workflows Using Generic Schemas: Application to the Security of Business Processes
Ronan-Alexandre Cherrueau, Mario Südholt, O. Chebaro
Pub Date: 2013-12-02 | DOI: 10.1109/CloudCom.2013.75
Existing approaches to the adaptation of workflows over Web services fall short in two respects. First, they provide only limited means, if any, for taking the execution history of a workflow into account. Second, they do not support adaptations that require modifications not only at the service composition level but also at the levels of interceptors and service implementations. This is particularly problematic for enforcing security properties over workflows: enforcing authorization properties, for instance, frequently requires execution contexts to be defined and modifications to be applied at all of these abstraction levels of Web services. We present two main contributions in this context. First, we introduce workflow adaptation schemas (WAS), a new notion of generic protocol-based workflow adapters. WAS enable the declarative definition of adaptations involving complex service compositions and implementations. Second, we present two real-world security issues related to the use of OAuth 2.0, a recent and widely used framework for the authorization of resource accesses. As we motivate, these security issues require history-based adaptations across different abstraction levels of services. We then show how to resolve these issues using WAS.
BonFIRE: The Clouds and Services Testbed
K. Kavoussanakis, Alastair C. Hume, Josep Martrat, C. Ragusa, M. Gienger, K. Campowsky, Gregory van Seghbroeck, Constantino Vázquez, Celia Velayos, Frederic Gittler, P. Inglesant, G. Carella, Vegard Engen, Michal Giertych, G. Landi, D. Margery
Pub Date: 2013-12-02 | DOI: 10.1109/CloudCom.2013.156
BonFIRE is a multi-site test bed that supports testing of Cloud-based and distributed applications. BonFIRE breaks the mould of commercial Cloud offerings by providing unique functionality in terms of observability, control, advanced Cloud features and ease of use for experimentation. A number of successful use cases have been executed on BonFIRE, involving industrial and academic users and delivering impact in diverse areas such as media, e-health, environment and manufacturing. The BonFIRE user base is expanding through the facility's free Open Access scheme and carries out important research daily, while the consortium works to sustain the facility beyond 2014.
Developing a Conceptual Framework for Cloud Security Assurance
B. Duncan, D. Pym, M. Whittington
Pub Date: 2013-12-02 | DOI: 10.1109/CloudCom.2013.144
Managing information security in the cloud is a challenge. Traditional checklist approaches to standards compliance may well provide compliance, but do not guarantee to provide security assurance. The complexity of cloud relationships must be acknowledged and explicitly managed by recognising the implications of self-interest of each party involved. We begin development of a conceptual modelling framework for cloud security assurance that can be used as a starting point for effective continuous security assurance, together with a high level of compliance.
Richer Requirements for Better Clouds
T. Kirkham, B. Matthews, K. Jeffery, K. Djemame, Django Armstrong
Pub Date: 2013-12-02 | DOI: 10.1109/CLOUDCOM.2013.161
Resource usage in Clouds can be improved by deploying applications with more richly defined requirements. Such "richer requirements" involve broader capture of application- and user-specific context, expressed in interrelated models. We present the use of model-based requirements with input from test beds that monitor resource use in terms of Trust, Risk, Eco-Efficiency and Cost (TREC) models. The results illustrate the potential that richer requirements have for better management of resources in Clouds.
Towards an Operating System for Intercloud
R. Strijkers, R. Cushing, M. Makkes, P. Meulenhoff, A. Belloum, C. D. Laat, R. Meijer
Pub Date: 2013-12-02 | DOI: 10.1109/CLOUDCOM.2013.105
Cyber-physical systems, such as intelligent dikes and smart energy systems, require scalable and flexible computing infrastructures to process data from instruments and sensor networks. Infrastructure as a Service clouds provide a flexible way to allocate remote distributed resources, but lack mechanisms to dynamically configure software (dependencies) and manage application execution. This paper describes the design and implementation of the Intercloud Operating System (ICOS), which acts between applications and distributed clouds, i.e., the Intercloud. ICOS schedules, configures, and executes applications in the Intercloud while taking data dependencies, budgets, and deadlines into account. Based on our experiences with the prototype, we present considerations and additional research challenges. The research on ICOS clarifies essential concepts needed to realize a flexible and scalable on-demand execution platform for distributed applications over distributed cloud providers.
A Framework for Self-Healing and Self-Adaptation of Cloud-Hosted Web-Based Applications
J. Magalhães, L. Silva
Pub Date: 2013-12-02 | DOI: 10.1109/CloudCom.2013.80
The adaptation of a cloud infrastructure is an ongoing process. Cloud adaptation aims to provide the cloud infrastructure with the computational resources necessary to meet the agreed SLAs while simultaneously optimizing resource usage. In a cloud, consumers are typically limited to the SLAs defined in advance with the cloud service provider. This creates a strong dependence on the cloud provider and leaves little room for maneuver when cloud customers need to adapt the infrastructure quickly to avoid service degradation. In this paper we present a framework that aims to reduce this gap. The SHõWA framework targets self-healing Web-based applications. It detects workload and performance anomalies from the consumer perspective and interacts with the cloud service provider to dynamically adjust the infrastructure. The experimental study we conducted highlights the role of SHõWA in avoiding service degradation when load and resource contention scenarios occur.
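The consumer-side detection loop described above can be illustrated with a minimal threshold sketch. This is not SHõWA's actual analysis; the baseline, window size, and degradation factor below are invented parameters for demonstration:

```python
from collections import deque

class AnomalyDetector:
    """Toy consumer-side performance monitor: compare a sliding window of
    response times against a learned baseline and signal the need for more
    capacity when latency degrades disproportionately."""

    def __init__(self, baseline_ms, window=5, factor=2.0):
        self.baseline = baseline_ms       # expected response time
        self.recent = deque(maxlen=window)
        self.factor = factor              # tolerated degradation multiple

    def observe(self, response_ms):
        """Record one measurement; return True if adaptation is needed."""
        self.recent.append(response_ms)
        avg = sum(self.recent) / len(self.recent)
        return avg > self.factor * self.baseline

det = AnomalyDetector(baseline_ms=100)
# A latency spike (resource contention) eventually trips the detector:
actions = [det.observe(ms) for ms in (90, 110, 105, 400, 600)]
# actions -> [False, False, False, False, True]
```

A real implementation would distinguish workload growth from contention and translate the trigger into a concrete request to the provider's API, which is exactly the interaction the framework mediates.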
Time-Aware VM-Placement and Routing with Bandwidth Guarantees in Green Cloud Data Centers
Aissan Dalvandi, G. Mohan, K. Chua
Pub Date: 2013-12-02 | DOI: 10.1109/CloudCom.2013.36
Variation in network performance due to shared resources is a key obstacle to cloud adoption. Thus, the success of cloud providers in attracting more tenants depends on their ability to provide bandwidth guarantees. Power efficiency in data centers has also become critically important for supporting a larger number of tenants. In this paper, we address the problem of time-aware VM-placement and routing (TVPR), where each tenant requests a specified amount of server resources (VMs) and network resources (bandwidth) for a given duration. The TVPR problem allocates the required resources for as many tenants as possible by finding the right set of servers on which to map their VMs and routing their traffic so as to minimize the total power consumption. We propose a multi-component, utilization-based power model that determines the total power consumption of a data center according to the resource utilization of its components (servers and switches). We then develop a mixed integer linear programming (MILP) formulation based on the proposed power model and prove the problem to be NP-complete. Since the TVPR problem is computationally prohibitive, we develop a fast and scalable heuristic algorithm. To demonstrate the efficiency of the proposed algorithm, we compare its performance with the numerical results obtained by solving the MILP problem using CPLEX for a small data center. We then demonstrate, through simulation, the effectiveness of the proposed algorithm in terms of power consumption and acceptance ratio for large data centers.
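The flavor of a time-aware placement heuristic can be sketched as a greedy, duration-aware first-fit packing that keeps as few servers powered on as possible. This is an illustrative stand-in, not the authors' algorithm; the server count, capacities, and requests are made up:

```python
def first_fit_place(requests, n_servers, capacity):
    """Place each request (vms, start, end) on the lowest-indexed servers
    with free slots for the request's whole duration, concentrating load
    so that higher-indexed servers can stay powered off."""
    usage = [[] for _ in range(n_servers)]  # accepted (start, end, vms) per server

    def free_slots(s, start, end):
        # Slots busy on server s during any overlap with [start, end).
        busy = sum(v for (a, b, v) in usage[s] if a < end and start < b)
        return capacity - busy

    placements = []
    for vms, start, end in requests:
        plan, need = [], vms
        for s in range(n_servers):
            take = min(need, free_slots(s, start, end))
            if take > 0:
                plan.append((s, take))
                need -= take
            if need == 0:
                break
        if need:                      # reject: capacity exhausted for the duration
            placements.append(None)
        else:
            for s, take in plan:
                usage[s].append((start, end, take))
            placements.append(plan)
    return placements

# Three tenants, 2 servers with 4 VM slots each:
out = first_fit_place([(3, 0, 10), (3, 5, 15), (2, 12, 20)], 2, 4)
# out -> [[(0, 3)], [(0, 1), (1, 2)], [(0, 2)]]
```

Note how the third tenant fits back onto server 0 because the first tenant's reservation has expired by time 12; exploiting such temporal reuse is the point of making placement time-aware. The paper's heuristic additionally routes traffic and accounts for switch power, which this sketch omits.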
Towards Data Handling Requirements-Aware Cloud Computing
Martin Henze, Marcel Grossfengels, Maik Koprowski, Klaus Wehrle
Pub Date: 2013-12-02 | DOI: 10.1109/CloudCom.2013.145
The adoption of the cloud computing paradigm is hindered by severe security and privacy concerns that arise when outsourcing sensitive data to the cloud. One important group of concerns regards the handling of data. On the one hand, users and companies have requirements for how their data should be treated. On the other hand, lawmakers impose requirements and obligations for specific types of data. These requirements have to be addressed to enable the affected users and companies to utilize cloud computing. However, we observe that current cloud offerings, especially in an intercloud setting, fail to meet them. Users have no way to specify their requirements for data handling in the cloud, and providers in the cloud stack - even if they were willing to meet these requirements - thus cannot treat the data adequately. In this paper, we identify and discuss the challenges of enabling data handling requirements awareness in the (inter-)cloud. To this end, we show how to extend a data storage service, AppScale, and Cassandra to follow data handling requirements. Thus, we make an important step towards data handling requirements-aware cloud computing.
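A minimal version of such a requirement check might look like the following. This is a toy sketch under invented annotations (node locations, an encryption flag), not the paper's AppScale/Cassandra extension:

```python
def node_satisfies(node, requirements):
    """True if a storage node meets every data handling requirement
    attached to a data item (location and encryption, in this toy)."""
    allowed = requirements.get("allowed_locations")
    ok_location = allowed is None or node["location"] in allowed
    ok_encrypt = node["encrypted"] or not requirements.get("require_encryption", False)
    return ok_location and ok_encrypt

def eligible_nodes(nodes, requirements):
    """Filter the replica candidates down to policy-compliant nodes."""
    return [n["id"] for n in nodes if node_satisfies(n, requirements)]

nodes = [
    {"id": "eu-1", "location": "DE", "encrypted": True},
    {"id": "us-1", "location": "US", "encrypted": True},
    {"id": "eu-2", "location": "FR", "encrypted": False},
]
reqs = {"allowed_locations": ["DE", "FR"], "require_encryption": True}
# eligible_nodes(nodes, reqs) -> ["eu-1"]
```

The hard part the paper addresses is not this filter but propagating such annotations through every layer of the cloud stack so that each provider can actually evaluate it.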
Flow-and-VM Migration for Optimizing Throughput and Energy in SDN-Based Cloud Datacenter
Wei-Chu Lin, Chien-Hui Liao, Kuan-Tsen Kuo, Charles H.-P. Wen
Pub Date: 2013-12-02 | DOI: 10.1109/CloudCom.2013.35
Minimizing energy consumption and improving performance in data centers are critical to cost saving for cloud operators, but traditionally these two optimization objectives are treated separately. This paper therefore presents a unified solution combining two strategies, flow migration and VM migration, to simultaneously maximize throughput and minimize energy. Traffic-aware flow migration (FM) is first incorporated into dynamic reroute (DENDIST), evolving into DENDIST-FM, in a software-defined network (SDN) to improve throughput and avoid congestion. Second, given energy and topology information, VM migration (ETA-VMM) helps reduce traffic loads while saving energy. Our experimental results indicate that, compared to previous works, the proposed method improves throughput by 42.5% on average with only 2.2% energy overhead. The unified flow-and-VM migration solution has thus proven effective for optimizing throughput and energy in SDN-based cloud data centers.
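The congestion-avoiding reroute step can be illustrated by moving a flow to the candidate path whose most-loaded (bottleneck) link has the most headroom. This is a hypothetical sketch, not the DENDIST-FM algorithm itself; the link names, capacities, and loads are invented:

```python
def bottleneck_util(path, link_load, link_cap):
    """Utilization of the busiest link on a path (a list of link names)."""
    return max(link_load[l] / link_cap[l] for l in path)

def pick_path(candidate_paths, link_load, link_cap):
    """Migrate the flow to the path with the least-utilized bottleneck,
    using the global link statistics an SDN controller can collect."""
    return min(candidate_paths,
               key=lambda p: bottleneck_util(p, link_load, link_cap))

link_cap = {"a-b": 10, "b-d": 10, "a-c": 10, "c-d": 10}
link_load = {"a-b": 9, "b-d": 2, "a-c": 4, "c-d": 3}
paths = [["a-b", "b-d"], ["a-c", "c-d"]]
best = pick_path(paths, link_load, link_cap)
# best -> ["a-c", "c-d"]  (bottleneck 0.4 vs. 0.9 on the a-b path)
```

The centralized view is what makes this possible in an SDN: the controller sees all link loads and can install the reroute, whereas VM migration then reduces the traffic matrix itself so that idle equipment can be powered down.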