Brokering Algorithms for Optimizing the Availability and Cost of Cloud Storage Services
Y. Mansouri, A. Toosi, R. Buyya
Pub Date: 2013-12-02 | DOI: 10.1109/CloudCom.2013.83
In recent years, cloud storage providers have gained popularity for storing personal and organizational data, offering highly reliable, scalable, and flexible resources to their users. Despite these advantages, most cloud providers suffer outages from time to time, so relying on a single cloud storage service threatens users' service availability. We believe a multi-cloud broker is a plausible way to remove this single point of failure and achieve very high availability. Because highly reliable cloud storage services impose enormous cost on users, and because the volume of data objects in cloud storage is approaching exabyte scale, optimal selection among a set of cloud storage providers is a crucial decision. To solve this problem, we propose an algorithm that determines the minimum replication cost of objects such that the expected availability required by users is guaranteed. We also propose an algorithm that optimally selects data centers for striped objects so that expected availability is maximized under a given budget. Simulation experiments evaluate our algorithms using failure probabilities and storage costs taken from real cloud storage providers.
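The replica-selection problem this abstract describes can be sketched as follows: the expected availability of a replica set is 1 minus the product of the per-data-center failure probabilities, and the cheapest set meeting an availability target can be found by exhaustive search when the number of providers is small. This is an illustrative sketch with hypothetical numbers, not the paper's algorithm:

```python
from itertools import combinations

def min_cost_replication(datacenters, target_availability):
    """Brute-force the cheapest subset of data centers whose combined
    availability (1 - product of failure probabilities) meets the target.
    datacenters: list of (name, failure_prob, storage_cost) tuples."""
    best = None
    for r in range(1, len(datacenters) + 1):
        for subset in combinations(datacenters, r):
            fail = 1.0
            for _, f, _ in subset:
                fail *= f
            if 1.0 - fail >= target_availability:
                cost = sum(c for _, _, c in subset)
                if best is None or cost < best[0]:
                    best = (cost, [name for name, _, _ in subset])
    return best
```

For example, with providers A (failure 0.01, cost 5), B (0.02, cost 3), and C (0.05, cost 1), a 0.995 availability target forces at least two replicas, and the cheapest feasible pair is {B, C}.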
An Approach for Dynamic Scaling of Resources in Enterprise Cloud
K. Kanagala, K. Sekaran
Pub Date: 2013-12-02 | DOI: 10.1109/CloudCom.2013.167
Elasticity is one of the key governing properties of cloud computing and directly affects both cost and performance. Most popular Infrastructure as a Service (IaaS) providers, such as Amazon Web Services (AWS), Windows Azure, and Rackspace, rely on threshold-based auto-scaling. In current IaaS environments, factors such as virtual machine (VM) turnaround time and VM stabilization time delay a newly started VM from its launch until it can begin servicing requests. If auto-scaling ignores these factors, Service Level Agreement (SLA) compliance and users' response times suffer. Scaling thresholds should therefore be a function of the load trend, so that VMs are readily available when needed. We developed an approach in which the thresholds adapt in advance, as functions of all the factors mentioned above. Our experimental results show that this approach yields better response times.
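The idea of making thresholds a function of the load trend and of VM start-up delays can be sketched as below. The linear trend extrapolation and all parameter names are assumptions for illustration, not the paper's actual method:

```python
def should_scale_out(load_history, capacity, threshold_frac,
                     vm_ready_secs, interval_secs):
    """Trend-aware scale-out decision: extrapolate the recent load trend
    and fire when the *predicted* load at (now + VM turnaround time +
    VM stabilization time) would breach the utilization threshold,
    instead of waiting for the current load to breach it."""
    if len(load_history) < 2:
        return load_history[-1] >= threshold_frac * capacity
    # Slope per sampling interval from the last two samples
    # (a deliberately simple trend estimate).
    slope = load_history[-1] - load_history[-2]
    steps_ahead = vm_ready_secs / interval_secs
    predicted = load_history[-1] + slope * steps_ahead
    return predicted >= threshold_frac * capacity
```

With load rising from 50 to 60 on a 100-unit host, an 80% threshold, and a 120-second VM ready time sampled every 60 seconds, this fires now (predicted load 80), whereas a static threshold would wait two more intervals.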
A Framework for Realizing Security on Demand in Cloud Computing
Pramod A. Jamkhedkar, Jakub Szefer, Diego Perez-Botero, Tianwei Zhang, G. Triolo, R. Lee
Pub Date: 2013-12-02 | DOI: 10.1109/CloudCom.2013.55
In this paper we present our vision for Security on Demand in cloud computing: a system in which cloud providers offer customized security for customers' code and data throughout the term of the contract. Security on demand enables security-focused competitive service differentiation and pricing, based on a threat model that matches the customer's security requirements for the virtual machines they lease. It also enables a cloud provider to bring new secure servers into the data center and derive revenue from them while continuing to use existing servers. We present a framework in which customers' security requests can be expressed and enforced by leveraging the capabilities of servers with different security architectures.
Towards a Model-Driven Solution to the Vendor Lock-In Problem in Cloud Computing
Gabriel Costa Silva, Louis M. Rose, R. Calinescu
Pub Date: 2013-12-02 | DOI: 10.1109/CloudCom.2013.131
Due to the heterogeneity of today's cloud providers, migrating applications between providers is extremely challenging. This lack of portability is caused, in part, by vendor lock-in: the strong dependency created between a cloud user and a cloud provider once the user deploys software on a specific cloud platform. This paper outlines our plans to address vendor lock-in by applying techniques from model-driven engineering (MDE), a contemporary and principled approach to software engineering that has been used to achieve greater portability of software. The paper presents preliminary models of two widely used IaaS services and an analysis of literature reporting real cases of software migration, and introduces a research question and method for our future work on using MDE to address vendor lock-in in cloud computing.
Competitive Cloud Resource Procurements via Cloud Brokerage
Xin Jin, Yu-Kwong Kwok, Yong Yan
Pub Date: 2013-12-02 | DOI: 10.1109/CloudCom.2013.92
In current IaaS cloud markets, tenant consumers compete non-cooperatively for cloud resources via their demand quantities, and service quality is offered on a best-effort basis. To better exploit the correlation among tenant demands, cloud brokerage services multiplex cloud resources and earn profits by receiving volume discounts from cloud providers. A fundamental but daunting problem facing a tenant consumer is competitive resource procurement via cloud brokerage. In this paper, we investigate this problem through non-cooperative game modeling. In the static game, each tenant judiciously selects the demand that maximizes its surplus, given the pricing strategies of the cloud brokers and complete information about the other tenants' demands. We also derive the Nash equilibrium of this non-cooperative procurement game. Performance evaluation at the Nash equilibrium yields insightful observations for both theoretical analysis and the design of practical cloud resource procurement schemes.
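The static game described above can be illustrated with a simple best-response iteration. The quadratic surplus function, the linear broker pricing rule, and all numeric parameters below are hypothetical stand-ins for the paper's model, chosen only to show how a Nash equilibrium of such a game can be computed:

```python
def best_response(a_i, others_demand, b, p0, c):
    """Closed-form best response for a tenant with the assumed surplus
    u_i = a_i*d_i - (b/2)*d_i**2 - (p0 + c*total_demand)*d_i,
    where the broker's unit price p0 + c*total_demand rises with
    aggregate demand. Setting du_i/dd_i = 0 gives the formula below."""
    return max(0.0, (a_i - p0 - c * others_demand) / (b + 2 * c))

def nash_demands(a, b=1.0, p0=1.0, c=0.5, iters=200):
    """Iterate simultaneous best responses until the demand profile is a
    fixed point, i.e. a Nash equilibrium of the static game.  Converges
    here because the best-response map is a contraction for these
    parameters."""
    d = [0.0] * len(a)
    for _ in range(iters):
        d = [best_response(a[i], sum(d) - d[i], b, p0, c)
             for i in range(len(a))]
    return d
```

At the fixed point no tenant can improve its surplus unilaterally: each demand equals its own best response to the others.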
A Resource Allocation Algorithm of Multi-cloud Resources Based on Markov Decision Process
G. Oddi, M. Panfili, A. Pietrabissa, L. Zuccaro, V. Suraci
Pub Date: 2013-12-02 | DOI: 10.1109/CloudCom.2013.24
Cloud technologies can nowadays be considered commodities. Access to virtual storage, computing, and networking resources empowers any business that needs dynamic IT capabilities. The Cloud Management Broker (CMB) plays a crucial role in handling heterogeneous virtualized cloud resources and offering a unified set of interfaces to cloud users. Moreover, the CMB is in charge of optimizing the usage of cloud resources while satisfying the requirements declared by the users. This paper proposes a novel multi-cloud resource allocation algorithm, based on a Markov Decision Process (MDP), that dynamically assigns resource requests to a set of IT resources (storage or computing) with the aim of maximizing the expected CMB revenue. Simulation results show the feasibility of the proposed algorithm and its superior performance compared with a greedy approach.
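The MDP machinery behind such an allocator can be illustrated with generic value iteration over a toy broker model. The two-state MDP, its rewards, and the return probability below are hypothetical, chosen only to show the technique, not the paper's model:

```python
def value_iteration(states, actions, transition, reward,
                    gamma=0.9, eps=1e-9):
    """Generic value iteration: V(s) = max_a [ r(s,a) + gamma*E[V(s')] ].
    transition(s, a) returns a list of (probability, next_state) pairs."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            best = max(
                reward(s, a)
                + gamma * sum(p * V[s2] for p, s2 in transition(s, a))
                for a in actions(s)
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < eps:
            return V

# Toy broker MDP (hypothetical numbers): state 1 = one idle resource
# unit available to lease; state 0 = the unit is leased out and is
# returned at each step with probability 0.5.
def actions(s):
    return ["lease", "hold"] if s == 1 else ["wait"]

def transition(s, a):
    if s == 1 and a == "lease":
        return [(1.0, 0)]
    if s == 1:                       # hold the unit idle
        return [(1.0, 1)]
    return [(0.5, 1), (0.5, 0)]     # wait for the unit to come back

def reward(s, a):
    # Leasing the idle unit earns one unit of broker revenue.
    return 1.0 if (s, a) == (1, "lease") else 0.0
```

The resulting value function confirms the intuition that holding an idle unit is worth more than waiting for a leased one to return, and the greedy policy extracted from it is to lease whenever possible.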
Improving the Shuffle of Hadoop MapReduce
Jingui Li, Xuelian Lin, Xiaolong Cui, Yue Ye
Pub Date: 2013-12-02 | DOI: 10.1109/CloudCom.2013.42
As an efficient parallel computing system based on the MapReduce model, Hadoop is widely used for large-scale data analysis such as data mining, machine learning, and scientific simulation. However, MapReduce still has performance problems, especially in the shuffle phase. To address them, this paper proposes a lightweight, stand-alone shuffle service component with a more efficient I/O policy to replace the existing shuffle phase in MapReduce. We describe how to implement the shuffle service in three steps: extract the shuffle from the reduce task as a separate shuffle task, reconstruct the shuffle task as a service, and improve the I/O scheduling policy on the map side. Both simulated experiments and comparative studies of MapReduce jobs were conducted to evaluate the performance of our improvements. The results reveal that our approach can decrease a job's overall execution time and make full use of cluster resources.
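The shuffle step that the paper factors out of the reduce task amounts to routing map outputs to reduce partitions and grouping values by key. A minimal sketch of that logic (not Hadoop's implementation, which also sorts, spills to disk, and fetches over the network):

```python
from collections import defaultdict

def shuffle(map_outputs, num_reducers):
    """Minimal shuffle: route each (key, value) pair emitted by the map
    tasks to a reduce partition by key hash, then group values per key.
    This is exactly the work the paper extracts into a shuffle task.
    map_outputs: one list of (key, value) pairs per map task."""
    partitions = [defaultdict(list) for _ in range(num_reducers)]
    for map_output in map_outputs:
        for key, value in map_output:
            partitions[hash(key) % num_reducers][key].append(value)
    return partitions
```

Each resulting partition holds all values for its keys, ready to be handed to one reducer; integer keys are used in the test because Python's string hashing is randomized per process.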
New Instructional Models for Building Effective Curricula on Cloud Computing Technologies and Engineering
Y. Demchenko, D. Bernstein, A. Belloum, Ana Oprescu, T. Wlodarczyk, C. D. Laat
Pub Date: 2013-12-02 | DOI: 10.1109/CLOUDCOM.2013.160
This paper presents ongoing work by a cooperating group of universities and professional education partners to develop an advanced education and training course on the foundations and engineering of cloud computing technologies. The central element of the proposed approach is the Common Body of Knowledge in Cloud Computing (CBK-CC), which defines the professional level of knowledge in the domain and allows consistent structuring and profiling of curricula. The paper presents the structure of the course and explains the principles used in developing the course materials, such as Bloom's Taxonomy applied to technical education and the andragogy instructional model for professional education and training. It explains the importance of a strong technical foundation for course materials that can address the interests of different categories of stakeholders and their roles and responsibilities in provisioning and operating cloud computing services. The paper briefly summarizes the cloud computing architecture concepts and models used, which allow consistent mapping between the CBK-CC, stakeholder roles and responsibilities, and required skills, and explains the importance of the requirements engineering stage, which provides a context for designing cloud-based services. Finally, the paper refers to the ongoing development of the cloud computing course at the University of Amsterdam and the University of Stavanger, and offers suggestions for building an advanced online training course for IT professionals.
Virtual Machine Placement Optimization Supporting Performance SLAs
Ankit Anand, J. Lakshmi, S. Nandy
Pub Date: 2013-12-02 | DOI: 10.1109/CloudCom.2013.46
The cloud computing model separates usage from ownership in terms of control over resource provisioning. Resources in the cloud are projected as services and realized through service models such as IaaS, PaaS, and SaaS. In the IaaS model, end users can specify the capacity of a VM but not its placement on a specific host, nor which other VMs it is co-hosted with. Typically, placement decisions target goals such as minimizing the number of physical hosts needed to support a given set of VMs while satisfying each VM's capacity requirement. However, the VMM cycles consumed in supporting I/O-intensive workloads inside a VM make this capacity requirement incomplete: I/O workloads inside VMs require substantial VMM CPU cycles to sustain their performance. Placement algorithms therefore need to account for VMM usage on a per-VM basis. Furthermore, cloud centers encounter situations in which changes to an existing VM's capacity, or the launch of new VMs, must be considered at different placement intervals. Usually this change is handled by migrating existing VMs to meet the goal of optimal placement. We argue that VM migration is not a trivial task and does incur performance loss during migration. We quantify this migration overhead based on the VM's workload type and include it in the placement problem; one goal of the placement algorithm is then to reduce a VM's migration prospects, thereby reducing the chance of performance loss during migration. This paper evaluates existing ILP and First Fit Decreasing (FFD) algorithms extended with these constraints to arrive at placement decisions. We observe that the ILP algorithm yields optimal results but needs long computation times even in its parallel version, whereas FFD heuristics are much faster and more scalable, generating sub-optimal solutions (compared with ILP) in time-scales useful for real-time decision making. We also observe that including VM migration overheads in the placement algorithm results in a marginal increase in the number of physical hosts but a significant reduction, of about 84 percent, in VM migrations.
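The FFD heuristic with per-VM hypervisor overhead can be sketched as below. The `io_factor` parameter and the linear overhead model are hypothetical illustrations of the paper's argument that placement must account for the VMM CPU cycles each VM induces; they are not the paper's actual cost model:

```python
def ffd_place(vms, host_capacity, vmm_overhead):
    """First Fit Decreasing placement where each VM's effective demand
    includes the hypervisor (VMM) CPU share it induces.
    vms: list of (name, cpu_demand, io_factor); effective demand is
    cpu_demand + vmm_overhead * io_factor, with io_factor a stand-in
    for the workload-dependent VMM usage of I/O-heavy VMs."""
    demands = sorted(
        ((name, cpu + vmm_overhead * io) for name, cpu, io in vms),
        key=lambda x: x[1], reverse=True,
    )
    hosts = []  # each host: [remaining_capacity, [placed VM names]]
    for name, demand in demands:
        for host in hosts:
            if host[0] >= demand:   # first host with room wins
                host[0] -= demand
                host[1].append(name)
                break
        else:                       # no existing host fits: open a new one
            hosts.append([host_capacity - demand, [name]])
    return [h[1] for h in hosts]
```

Note how the I/O-heavy "web" VM's effective demand (40 + 10×2 = 60) changes the packing: ignoring VMM overhead, all three VMs below would fit on one 100-unit host.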
Early Observations on Performance of Google Compute Engine for Scientific Computing
Zheng Li, L. O'Brien, R. Ranjan, Miranda Zhang
Pub Date: 2013-12-02 | DOI: 10.1109/CloudCom.2013.7
Although cloud computing emerged for business applications in industry, public cloud services have been widely accepted and encouraged for scientific computing in academia. The recently released Google Compute Engine (GCE) is claimed to support high-performance and computationally intensive tasks, yet few evaluation studies reveal GCE's scientific capabilities. Since fundamental performance benchmarking is the standard strategy for early-stage evaluation of new cloud services, we followed the Cloud Evaluation Experiment Methodology (CEEM) to benchmark GCE and compare it with Amazon EC2, to help understand GCE's elementary capability for dealing with scientific problems. The experimental results and analyses show both potential advantages of, and possible threats to, applying GCE to scientific computing. For example, compared with Amazon's EC2 service, GCE may better suit applications that require frequent disk operations, while it may not yet be ready for parallel computing within a single VM. Following the same evaluation methodology, other evaluators can replicate and/or supplement this fundamental evaluation of GCE. Based on these results, suitable GCE environments can be established for case studies of solving real scientific problems.