Cloud services need fair pricing for both service providers and customers: if the price is too high, customers may not use the service; if it is too low, providers have little incentive to develop services. This paper proposes a novel pricing framework for cloud services that uses game theory (Cournot duopoly, cartel, and Stackelberg models) and data mining techniques (clustering and classification, e.g., Support Vector Machines (SVM)) to determine optimal prices for cloud services. The framework is dynamic because the price is determined from recent usage data and available resources; intelligent because it takes various economic models into consideration; benign because it considers the two conflicting parties, service providers and consumers, at the same time; and customizable based on the pricing strategies proposed by service providers and the usage patterns exhibited by consumers. Linear regression is used within the various game theory models to determine the optimal price, and a global pricing union (GPU) framework is proposed to achieve the best practice among the game theory models. The paper applies this pricing framework to a case study in cloud services and demonstrates that the resulting prices meet the requirements of traditional supply-demand analysis; in other words, the prices obtained are good enough.
{"title":"DICB: Dynamic Intelligent Customizable Benign Pricing Strategy for Cloud Computing","authors":"W. Tsai, Guanqiu Qi","doi":"10.1109/CLOUD.2012.49","DOIUrl":"https://doi.org/10.1109/CLOUD.2012.49","url":null,"abstract":"As cloud services need a fair pricing for both service providers and customers. If the price is too high, the customer may not use it, if the price is too low, service providers have less incentive to develop services. This paper proposes a novel pricing framework for cloud services using game theory (Cournot Duopoly, Cartel, and Stackelberg models) and data mining techniques (clustering and classification, e.g., SVM (Support Vector Machine)) to determine optimal prices for cloud services. The framework is dynamic because the price is determined based on recent usage data and available resources, it is also intelligent as it takes into various economic models into consideration, it is benign because it considers two conflicting parties, service providers and consumers, into consideration at the same time, and it is customizable based on various pricing strategies proposed by service providers and usage patterns as exhibited by consumers. Linear regression is used in various game theory models to determine the optimal price. A global pricing union (GPU) framework is proposed to achieve the best practice of game theory models. Based on the proposed technique, this paper applies this pricing framework to a case study in cloud services, and demonstrates that the prices obtained meet the requirement of traditional supply-demand analysis. In other words, the price obtained is good enough.","PeriodicalId":214084,"journal":{"name":"2012 IEEE Fifth International Conference on Cloud Computing","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133725273","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Many business-oriented services will gradually be offered in the Cloud. Java Message Service (JMS) is a critical messaging technology in Java-based business applications, particularly those built on the Java Enterprise Edition (Java EE) open standard. Maintaining high performance in a horizontally scaled, elastic cloud environment is critical to the success of these business applications. In this paper, we present practical considerations for optimizing JMS performance in cloud deployments; some of the findings may also serve to improve the design of JMS containers so that they adapt well to cloud computing. Our work also includes a performance evaluation of the proposed strategies.
{"title":"Optimizing JMS Performance for Cloud-Based Application Servers","authors":"Zhenyun Zhuang, Yao-Min Chen","doi":"10.1109/CLOUD.2012.136","DOIUrl":"https://doi.org/10.1109/CLOUD.2012.136","url":null,"abstract":"Many business-oriented services will be gradually offered in the Cloud. Java Message Service (JMS) is a critical messaging technology in Java-based business applications, particularly to those that are based on the Java Enterprise Edition (Java EE) open standard. Maintaining high performance in the horizontally scaled, and elastic, cloud environment is critical to the success of the business applications. In this paper, we present practical considerations in optimizing JMS performance for the cloud deployment, where some of the findings may also serve to improve the design of JMS container so it adapts well to cloud computing. Our work also includes performance evaluation on the proposed strategies.","PeriodicalId":214084,"journal":{"name":"2012 IEEE Fifth International Conference on Cloud Computing","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114150419","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Grace Metri, S. Srinivasaraghavan, Weisong Shi, M. Brockmeyer
Energy efficiency is an important issue for data centers given the amount of energy they consume each year. However, there is still a gap in understanding exactly how the application type and the heterogeneity of servers and their configurations affect the energy efficiency of data centers. To this end, we introduce the notion of Application Specific Energy Efficiency (ASEE) in order to rank the energy efficiency of heterogeneous servers based on the hosted applications. We conducted extensive sets of experiments using three benchmarks: TPC-W, BS Seeker, and Matrix Stress mark. We observed that each server has a different ASEE value depending on the type of application running, the size of the virtual machine, the application load, and the scalability factor. In some cases, we observed a 70% ASEE improvement by changing the virtual machine size within the same node while keeping an identical load. In other cases, we observed up to an 86% ASEE improvement by running the same application with the same load in the same virtual machine size but on different nodes. Our observations have many implications, which include but are not limited to improving virtual machine scheduling based on the ASEE rank of a node. Another implication is the importance of accurately predicting the application load and selecting the appropriate virtual machine size in order to improve ASEE.
{"title":"Experimental Analysis of Application Specific Energy Efficiency of Data Centers with Heterogeneous Servers","authors":"Grace Metri, S. Srinivasaraghavan, Weisong Shi, M. Brockmeyer","doi":"10.1109/CLOUD.2012.89","DOIUrl":"https://doi.org/10.1109/CLOUD.2012.89","url":null,"abstract":"Energy efficiency is an important issue for data centers given the amount of energy they consume yearly. However, there is still a gap of understanding of how exactly the application type and the heterogeneity of servers and their configuration impact the energy efficiency of data centers. To this end, we introduce the notion of Application Specific Energy Efficiency (ASEE) in order to rank energy efficiency of heterogeneous servers based on the hosted applications. We conducted extensive sets of experiments using three benchmarks: TPC-W, BS Seeker, and Matrix Stress mark. We observed that each server has different ASEE value based on the type of application running, the size of the virtual machine, the application load, and the scalability factor. In some cases, we witnessed 70% of ASEE improvement by changing the virtual machine size within the same node while keeping an identical load. In different cases, we witnessed up to 86% of ASEE improvement by running the same application with the same load within the same size of virtual machine but on different nodes. Our observation has many implications which include but are not limited to improving virtual machine scheduling based on the ASEE rank of the node. Another implication stresses on the importance of accurate prediction of application load and selecting the appropriate virtual machine size in order to improve the ASEE.","PeriodicalId":214084,"journal":{"name":"2012 IEEE Fifth International Conference on Cloud Computing","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114492323","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We present an efficient distributed graph database architecture for large-scale social computing. The architecture consists of a distributed graph data processing system and a distributed graph data storage system, and we leverage the advantages of both systems to achieve efficient social computing. We conduct extensive experiments to demonstrate the performance of our system, employing four real-world, large-scale social networks (YouTube, Flickr, LiveJournal, and Orkut) as test data. We also implement several representative social applications and graph algorithms to examine the performance of our system. We employ two main optimization techniques in our system: indexing and graph partitioning. Experimental results indicate that our system outperforms GoldenOrb, an implementation of Google's Pregel model.
{"title":"Distributed Graph Database for Large-Scale Social Computing","authors":"Li-Yung Ho, Jan-Jan Wu, Pangfeng Liu","doi":"10.1109/CLOUD.2012.33","DOIUrl":"https://doi.org/10.1109/CLOUD.2012.33","url":null,"abstract":"We present an efficient distributed graph database architecture for large scale social computing. The architecture consists of a distributed graph data processing system and a distributed graph data storage system. We leverage the advantages of both systems to achieve efficient social computing. We conduct extensive experiments to demonstrate the performance of our system. We employ four real-world, large scale social networks - YouTube, Flicker, LiveJournal and Orkut as test data. We also implement several representative social applications and graph algorithms to examine the performance of our system. We employ two main optimization techniques in our system ¡Vindexing and graph partitioning. Experimental results indicate that our system outperforms GoldenOrb, an implementation Pregel model from Google.","PeriodicalId":214084,"journal":{"name":"2012 IEEE Fifth International Conference on Cloud Computing","volume":"os7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128322008","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Yuma Matsui, Aaron Gidding, T. Levy, F. Kuester, T. DeFanti
Modern field sciences such as archaeology are heavily data-driven and use various kinds of state-of-the-art measurement instruments, which requires sophisticated computer infrastructure to manage large amounts of heterogeneous data. The concept of cloud computing provides a flexible cyberinfrastructure for large-scale data management, which is being deployed at university campuses. A problem unique to field research is that researchers often work at remote field sites with limited computer and network resources. For a data management system that has to work both in the campus cloud and under vastly different field conditions, portability of the computer infrastructure and common data access methods are essential requirements. This paper explores the portability of cloud infrastructure and illustrates the portable data management system that we used in a recent archaeological expedition.
{"title":"Portable Data Management Cloud for Field Science","authors":"Yuma Matsui, Aaron Gidding, T. Levy, F. Kuester, T. DeFanti","doi":"10.1109/CLOUD.2012.68","DOIUrl":"https://doi.org/10.1109/CLOUD.2012.68","url":null,"abstract":"A modern field science such as archaeology is heavily data-driven using various kinds of state-of-the-art measurement instruments. It requires sophisticated computer infrastructure to manage large amounts of heterogeneous data. The concept of cloud computing provides a flexible cyber infrastructure for large-scale data management, which is being deployed at university campuses. A problem unique to field research is that researchers often work at remote field sites with limited computer and network resources. For a data management system that has to work in the campus cloud and under vastly different field conditions, portability of computer infrastructure and common data access methods are essential requirements. This paper explores the portability of cloud infrastructure and illustrates the portable data management system that we used in a recent archaeological expedition.","PeriodicalId":214084,"journal":{"name":"2012 IEEE Fifth International Conference on Cloud Computing","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132932853","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
S. Dutta, Sankalp Gera, Akshat Verma, B. Viswanathan
Enterprise clouds today support an on-demand resource allocation model and can provide resources requested by applications in a near-online manner using virtual machine resizing or cloning. However, to take advantage of an on-demand resource model, enterprise applications need to be automatically scaled in a way that makes the most efficient use of resources. In this work, we present the SmartScale automated scaling framework. SmartScale uses a combination of vertical scaling (adding more resources to existing VM instances) and horizontal scaling (adding more VM instances) to ensure that the application is scaled in a manner that optimizes both resource usage and the reconfiguration cost incurred due to scaling. The SmartScale methodology is proactive and ensures that the application converges quickly to the desired scaling level even when the workload intensity changes significantly. We evaluate SmartScale using real production traces on Olio, an emerging cloud benchmark, running on a KVM-based cloud testbed. We present both theoretical and experimental evidence that comprehensively establishes the effectiveness of SmartScale.
{"title":"SmartScale: Automatic Application Scaling in Enterprise Clouds","authors":"S. Dutta, Sankalp Gera, Akshat Verma, B. Viswanathan","doi":"10.1109/CLOUD.2012.12","DOIUrl":"https://doi.org/10.1109/CLOUD.2012.12","url":null,"abstract":"Enterprise clouds today support an on demand resource allocation model and can provide resources requested by applications in a near online manner using virtual machine resizing or cloning. However, in order to take advantage of an on demand resource model, enterprise applications need to be automatically scaled in a way that makes the most efficient use of resources. In this work, we present the SmartScale automated scaling framework. SmartScale uses a combination of vertical (adding more resources to existing VM instances) and horizontal (adding more VM instances) scaling to ensure that the application is scaled in a manner that optimizes both resource usage and the reconfiguration cost incurred due to scaling. The SmartScale methodology is proactive and ensures that the application converges quickly to the desired scaling level even when the workload intensity changes significantly. We evaluate SmartScale using real production traces on Olio, an emerging cloud benchmark, running on a kvm-based cloud testbed. We present both theoretical and experimental evidence that comprehensively establish the effectiveness of SmartScale.","PeriodicalId":214084,"journal":{"name":"2012 IEEE Fifth International Conference on Cloud Computing","volume":"67 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121719471","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cloud computing is one of the most relevant computing paradigms available today. Its adoption has increased in recent years due to large investments and research from business enterprises and academic institutions. Among all the services cloud providers typically offer, Infrastructure as a Service has gained momentum for solving HPC problems in a more dynamic way without the need for expensive investments. Integrating a large number of providers is a major goal, as it enables improving the quality of the selected resources in terms of pricing, speed, redundancy, and so on. In this paper, we propose a system architecture, based on semantic solutions, to build an interoperable scheduler for federated clouds that works with several IaaS (Infrastructure as a Service) providers in a uniform way. Based on this architecture, we implement a proof-of-concept prototype and test it with two different cloud solutions to provide experimental results on the viability of our approach.
{"title":"A Semantic Scheduler Architecture for Federated Hybrid Clouds","authors":"Idafen Santana-Pérez, M. Pérez-Hernández","doi":"10.1109/CLOUD.2012.43","DOIUrl":"https://doi.org/10.1109/CLOUD.2012.43","url":null,"abstract":"Cloud computing is one the most relevant computing paradigms available nowadays. Its adoption has increased during last years due to the large investment and research from business enterprises and academia institutions. Among all the services cloud providers usually offer, Infrastructure as a Service has reached its momentum for solving HPC problems in a more dynamic way without the need of expensive investments. The integration of a large number of providers is a major goal as it enables the improvement of the quality of the selected resources in terms of pricing, speed, redundancy, etc. In this paper, we propose a system architecture, based on semantic solutions, to build an interoperable scheduler for federated clouds that works with several IaaS (Infrastructure as a Service) providers in a uniform way. Based on this architecture we implement a proof-of-concept prototype and test it with two different cloud solutions to provide some experimental results about the viability of our approach.","PeriodicalId":214084,"journal":{"name":"2012 IEEE Fifth International Conference on Cloud Computing","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122172636","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cloud providers aim to guarantee Service Level Agreements (SLAs) in a resource-efficient way. Among other things, this means that the resources of virtual machines (VMs) and physical machines (PMs) have to be allocated autonomically in response to external influences such as workload or environmental changes. Workload volatility (WV) is one of the crucial factors that influence the quality of the suggested allocations. In this paper, we devise a novel approach for self-adaptive and resource-efficient decision-making that considers the three conflicting goals of minimizing the number of SLA violations, maximizing resource utilization, and minimizing the number of time- and energy-consuming reconfiguration actions. We propose self-adaptive, rule-based knowledge management for autonomic VM reconfiguration that considers the rapidness of changes in the workload, i.e., WV. We introduce a novel WV categorization and present cost- and volatility-based methods for self-tuning. We evaluate these methods with a large variety of synthetically generated workloads and with real-world measurements gathered from an image rendering application and a scientific workflow for RNA sequencing. The evaluation shows that in most cases the self-adaptive approach outperforms the static approach.
{"title":"Self-Adaptive and Resource-Efficient SLA Enactment for Cloud Computing Infrastructures","authors":"M. Maurer, I. Brandić, R. Sakellariou","doi":"10.1109/CLOUD.2012.55","DOIUrl":"https://doi.org/10.1109/CLOUD.2012.55","url":null,"abstract":"Cloud providers aim at guaranteeing Service Level Agreements (SLAs) in a resource-efficient way. This, amongst others, means that resources of virtual (VMs) and physical machines (PMs) have to be autonomically allocated responding to external influences as workload or environmental changes. Thereby, workload volatility (WV) is one of the crucial factors that influence the quality of suggested allocations. In this paper we devise a novel approach for self-adaptive and resource-efficient decision-making considering the three conflicting goals of minimizing the number of SLA violations, maximizing resource utilization, and minimizing the number of necessary time- and energy-consuming reconfiguration actions. We propose self-adaptive rule-based knowledge management for autonomic VM reconfiguration considering the rapidness of changes in the workload, i.e., WV. We introduce a novel WV categorization and present cost and volatility based methods for self-tuning. We evaluate these methods by a large variety of synthetically generated workloads, and by real-world measurements gathered from an image rendering application and a scientific workflow for RNA sequencing. Evaluation shows that in most cases the self-adaptive approach outperforms the static approach.","PeriodicalId":214084,"journal":{"name":"2012 IEEE Fifth International Conference on Cloud Computing","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125564991","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Wei Zhang, Hong Tang, Hao Jiang, Tao Yang, Xiaogang Li, Yue Zeng
In a virtualized cloud computing environment, frequent snapshot backups of virtual disks improve hosting reliability, but the storage demand of such operations is huge. While dirty-bit-based techniques can identify unmodified data between versions, full deduplication with fingerprint comparison can remove more redundant content at the cost of computing resources. This paper presents a multi-level selective deduplication scheme that integrates inner-VM and cross-VM duplicate elimination under a stringent resource requirement. The scheme uses popular common data to facilitate fingerprint comparison while reducing its cost, and it strikes a balance between local and global deduplication to increase parallelism and improve reliability. Experimental results show the proposed scheme can achieve a high deduplication ratio while using a small amount of cloud resources.
{"title":"Multi-level Selective Deduplication for VM Snapshots in Cloud Storage","authors":"Wei Zhang, Hong Tang, Hao Jiang, Tao Yang, Xiaogang Li, Yue Zeng","doi":"10.1109/CLOUD.2012.78","DOIUrl":"https://doi.org/10.1109/CLOUD.2012.78","url":null,"abstract":"In a virtualized cloud computing environment, frequent snapshot backup of virtual disks improves hosting reliability but storage demand of such operations is huge. While dirty bit-based technique can identify unmodified data between versions, full deduplication with fingerprint comparison can remove more redundant content at the cost of computing resources. This paper presents a multi-level selective deduplication scheme which integrates inner-VM and cross-VM duplicate elimination under a stringent resource requirement. This scheme uses popular common data to facilitate fingerprint comparison while reducing the cost and it strikes a balance between local and global deduplication to increase parallelism and improve reliability. Experimental results show the proposed scheme can achieve high deduplication ratio while using a small amount of cloud resources.","PeriodicalId":214084,"journal":{"name":"2012 IEEE Fifth International Conference on Cloud Computing","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124158901","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Many cloud services today run on top of geographically distributed infrastructures for better reliability and performance. They need an effective way to direct user requests to a suitable data center based on factors such as performance and cost. Previous work focused on efficiency and invariably considered the simple objective of maximizing aggregated utility; such approaches favor users closer to the infrastructure. In this paper, we argue that fairness should be considered so that users at disadvantaged locations also enjoy reasonable performance and performance is balanced across the entire system. We adopt a general fairness criterion based on Nash bargaining solutions and present a general optimization framework that models the realistic environment and practical constraints that a cloud faces. We develop an efficient distributed algorithm based on dual decomposition and the subgradient method, and we evaluate its effectiveness and practicality using real-world traffic traces and electricity prices.
{"title":"A General and Practical Datacenter Selection Framework for Cloud Services","authors":"Hong Xu, Baochun Li","doi":"10.1109/CLOUD.2012.16","DOIUrl":"https://doi.org/10.1109/CLOUD.2012.16","url":null,"abstract":"Many cloud services nowadays are running on top of geographically distributed infrastructures for better reliability and performance. They need an effective way to direct the user requests to a suitable data center, depending on factors including performance, cost, etc. Previous work focused on efficiency and invariably considered the simple objective of maximizing aggregated utility. These approaches favor users closer to the infrastructure. In this paper, we argue that fairness should be considered to ensure users at disadvantageous locations also enjoy reasonable performance, and performance is balanced across the entire system. We adopt a general fairness criterion based on Nash bargaining solutions, and present a general optimization framework that models the realistic environment and practical constraints that a cloud faces. We develop an efficient distributed algorithm based on dual decomposition and the sub gradient method, and evaluate its effectiveness and practicality using real-world traffic traces and electricity prices.","PeriodicalId":214084,"journal":{"name":"2012 IEEE Fifth International Conference on Cloud Computing","volume":"117 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130045320","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}