
Latest publications — 2014 IEEE 7th International Conference on Cloud Computing

A virtual machine placement algorithm for balanced resource utilization in cloud data centers
Pub Date : 2014-06-27 DOI: 10.1109/CLOUD.2014.70
N. Hieu, M. D. Francesco, Antti Ylä-Jääski
Virtual machine (VM) placement is the process of selecting the most suitable server in a large cloud data center on which to deploy newly created VMs. Several approaches have been proposed to solve this problem. However, most existing solutions consider only a limited number of resource types, resulting in unbalanced load or in the unnecessary activation of physical servers. In this article, we propose an algorithm, called Max-BRU, that maximizes resource utilization and balances the usage of resources across multiple dimensions. Our algorithm is based on multiple resource-constraint metrics that help find the most suitable server for deploying VMs in large cloud data centers. The proposed Max-BRU algorithm is evaluated by simulations based on synthetic datasets. Experimental results show two major improvements over existing approaches for VM placement. First, Max-BRU increases resource utilization by minimizing the number of physical servers used. Second, Max-BRU effectively balances the utilization of multiple types of resources.
{"title":"A virtual machine placement algorithm for balanced resource utilization in cloud data centers","authors":"N. Hieu, M. D. Francesco, Antti Ylä-Jääski","doi":"10.1109/CLOUD.2014.70","DOIUrl":"https://doi.org/10.1109/CLOUD.2014.70","url":null,"abstract":"Virtual machine (VM) placement is the process of selecting the most suitable server in large cloud data centers to deploy newly-created VMs. Several approaches have been proposed to find a solution to this problem. However, most of the existing solutions only consider a limited number of resource types, thus resulting in unbalanced load or in the unnecessary activation of physical servers. In this article, we propose an algorithm, called Max-BRU, that maximizes the resource utilization and balances the usage of resources across multiple dimensions. Our algorithm is based on multiple resource-constraint metrics that help to find the most suitable server for deploying VMs in large cloud data centers. The proposed Max-BRU algorithm is evaluated by simulations based on synthetic datasets. Experimental results show two major improvements over the existing approaches for VM placement. First, Max-BRU increases the resource utilization by minimizing the amount of physical servers used. Second, Max-BRU effectively balances the utilization of multiple types of resources.","PeriodicalId":288542,"journal":{"name":"2014 IEEE 7th International Conference on Cloud Computing","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128709493","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 44
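The Max-BRU abstract above turns on one idea: score candidate servers so that placement keeps utilization balanced across resource dimensions. A minimal sketch of such a heuristic (illustrative only; the paper defines its own resource-constraint metrics, which this does not reproduce) scores each feasible server by the spread of its per-dimension utilization ratios after placement:

```python
def imbalance(used, capacity):
    """Variance of per-dimension utilization ratios; 0 means perfectly balanced."""
    ratios = [u / c for u, c in zip(used, capacity)]
    mean = sum(ratios) / len(ratios)
    return sum((r - mean) ** 2 for r in ratios) / len(ratios)

def place_vm(vm_demand, servers):
    """Pick the server that fits the VM and keeps utilization across
    dimensions (e.g. CPU, RAM) most balanced; None if no server fits."""
    best, best_score = None, None
    for name, (used, cap) in servers.items():
        new_used = [u + d for u, d in zip(used, vm_demand)]
        if any(u > c for u, c in zip(new_used, cap)):
            continue  # VM does not fit on this server
        score = imbalance(new_used, cap)
        if best_score is None or score < best_score:
            best, best_score = name, score
    return best

servers = {
    "pm1": ([4.0, 8.0], [16.0, 32.0]),   # (used, capacity) per dimension
    "pm2": ([12.0, 4.0], [16.0, 32.0]),
}
choice = place_vm([2.0, 4.0], servers)   # lands on the server left most balanced
```

Here "pm1" wins because placing the VM there leaves both dimensions at the same utilization ratio, while "pm2" would end up CPU-heavy.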
Adaptive Market Mechanism for Efficient Cloud Services Trading
Pub Date : 2014-06-27 DOI: 10.1109/CLOUD.2014.99
S. Chichin, Quoc Bao Vo, R. Kowalczyk
Cloud resource allocation and pricing is a significant and challenging problem that modern cloud providers need to address. In this work, we propose an adaptive greedy mechanism, a new type of greedy market mechanism for efficient cloud resource allocation. The mechanism is combinatorial and is designed to be operated by a single cloud provider. We prove that the proposed market mechanism is truthful, i.e., buyers have no incentive to lie about their true valuation of the resource. Our experimental investigation shows that the proposed mechanism outperforms the conventional (single-shot) approach to solving combinatorial auctions in terms of generated social welfare and resource utilization.
{"title":"Adaptive Market Mechanism for Efficient Cloud Services Trading","authors":"S. Chichin, Quoc Bao Vo, R. Kowalczyk","doi":"10.1109/CLOUD.2014.99","DOIUrl":"https://doi.org/10.1109/CLOUD.2014.99","url":null,"abstract":"Cloud resource allocation and pricing is a significant and challenging problem for modern cloud providers, which needs to be addressed. In this work, we propose an adaptive greedy mechanism, which is a new type of greedy market mechanism for efficient cloud resource allocation. The mechanism is combinatorial and it is designed to be operated by a single cloud provider. We prove that our proposed market mechanism is truthful, i.e. the buyers do not have an incentive to lie about their true valuation for the resource. Our experimental investigation showed that the proposed mechanism outperforms the conventional (single-shot) approach for solving combinatorial auction in terms of generated social welfare and resource utilization.","PeriodicalId":288542,"journal":{"name":"2014 IEEE 7th International Conference on Cloud Computing","volume":"138 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132391246","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
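A toy illustration of how truthfulness arises in a greedy market mechanism, much simpler than the combinatorial mechanism proposed in this paper: bids are served in order of price density, and every winner pays the density of the first rejected bid rather than their own bid, so overstating a valuation cannot lower one's payment:

```python
def greedy_allocate(bids, capacity):
    """bids: {bidder: (units_requested, total_price)}.  Serve bids in
    decreasing price-per-unit order; winners pay the per-unit price of
    the first rejected bid (a simplified critical value).  Payments are
    0 if capacity is never exhausted."""
    order = sorted(bids.items(), key=lambda kv: kv[1][1] / kv[1][0], reverse=True)
    winners, left, threshold = [], capacity, 0.0
    for bidder, (units, price) in order:
        if units <= left:
            winners.append(bidder)
            left -= units
        else:
            threshold = price / units  # first rejected bid sets the price
            break
    payments = {w: bids[w][0] * threshold for w in winners}
    return winners, payments

# Capacity 5: "a" and "b" win on density; "c" is rejected and sets the price.
winners, payments = greedy_allocate(
    {"a": (2, 10.0), "b": (3, 9.0), "c": (4, 4.0)}, capacity=5)
```

Because a winner's payment depends only on the losing bids, reporting a higher valuation than one's true value cannot change the price paid, only whether one wins.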
AppCloak: Rapid Migration of Legacy Applications into Cloud
Pub Date : 2014-06-27 DOI: 10.1109/CLOUD.2014.112
Byungchul Tak, Chunqiang Tang
Although the cloud has been adopted by many organizations as their main infrastructure for IT delivery, a large number of legacy applications still run in non-cloud hosting environments. It is therefore crucial to have migration techniques for such legacy applications so that they can benefit from the many advantages of the cloud, such as elasticity, low upfront investment, and fast time-to-market. However, migrating a large number of legacy applications into the cloud in a timely manner is a daunting task. Common techniques, such as redeveloping (i.e., modernizing) them or reinstalling from scratch, entail high costs. To mitigate these problems, we have developed a rapid migration technique, called AppCloak, that allows users to literally copy an already-installed application to the cloud and run it without any modifications. The technique is based on intercepting a selected set of system calls and replacing their parameters and return values to hide any environment differences from the application. We demonstrate that our technique works in Amazon EC2 and quantify the performance overhead.
{"title":"AppCloak: Rapid Migration of Legacy Applications into Cloud","authors":"Byungchul Tak, Chunqiang Tang","doi":"10.1109/CLOUD.2014.112","DOIUrl":"https://doi.org/10.1109/CLOUD.2014.112","url":null,"abstract":"Although cloud has been adopted by many organizations as their main infrastructure for IT delivery, there are still a large number of legacy applications running in non-cloud hosting environments. Thus, it is crucial to have migration techniques for such legacy applications so that they can benefit from many advantages of cloud such as elasticity, low upfront investment, and fast time-to-market. However, migrating large number of legacy applications into cloud in a timely manner is a daunting task. Common techniques such as redeveloping (i.e., modernizing) them or reinstalling from the scratch entails high costs. To mitigate these problems, we have developed a rapid migration technique, called AppCloak, that allows users to literally copy an already-installed application to cloud and run it without any modifications. The technique is based on intercepting a selected set of system calls and replacing the parameters and return values to hide any differences of environments to the application. We demonstrate that our technique works in Amazon EC2 and quantify the performance overhead.","PeriodicalId":288542,"journal":{"name":"2014 IEEE 7th International Conference on Cloud Computing","volume":"153 1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114066033","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8
Lightweight Automatic Resource Scaling for Multi-tier Web Applications
Pub Date : 2014-06-27 DOI: 10.1109/CLOUD.2014.69
Lenar Yazdanov, C. Fetzer
Dynamic resource scaling is a key property of cloud computing. Users can acquire or release the capacity required by their applications on the fly. The most widely used and practical approach to dynamic scaling is based on predefined policies (rules). For example, IaaS providers such as RightScale ask application owners to set the scaling rules manually. This task assumes that the user has expert knowledge of the application being run on the cloud, which is not always the case. In this paper we propose a lightweight adaptive multi-tier scaling framework, VscalerLight, which learns its scaling policy online. Our framework performs fine-grained vertical resource scaling of multi-tier web applications. We present the design and implementation of VscalerLight and evaluate the framework against the widely used RUBiS benchmark. Results show that an application under the control of VscalerLight meets the 95th-percentile response time specified in its SLA.
{"title":"Lightweight Automatic Resource Scaling for Multi-tier Web Applications","authors":"Lenar Yazdanov, C. Fetzer","doi":"10.1109/CLOUD.2014.69","DOIUrl":"https://doi.org/10.1109/CLOUD.2014.69","url":null,"abstract":"Dynamic resource scaling is a key property of cloud computing. Users can acquire or release required capacity for their applications on-the-fly. The most widely used and practical approach for dynamic scaling based on predefined policies (rules). For example, IaaS providers such as RightScale asks application owners to manually set the scaling rules. This task assumes, that the user has an expertise knowledge about the application being run on the cloud. However, it is not always true. In this paper we propose a lightweight adaptive multi-tier scaling framework VscalerLight, which learns scaling policy online. Our framework performs fine-grained vertical resource scaling of multi-tier web application. We present the design and implementation of VscalerLight. We evaluate the framework against widely used RUBiS benchmark. Results show that the application under control of VscalerLight guarantees 95th percentile response time specified in SLA.","PeriodicalId":288542,"journal":{"name":"2014 IEEE 7th International Conference on Cloud Computing","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121651167","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 18
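The predefined-policy baseline that the paper argues against can be sketched as a simple threshold rule for one control step of vertical scaling. VscalerLight itself learns its policy online rather than using fixed thresholds; the thresholds and limits below are illustrative, not from the paper:

```python
def scale_step(cpu_util, current_vcpus, min_vcpus=1, max_vcpus=8,
               high=0.80, low=0.30):
    """One control step of a rule-based vertical scaler: grow the VM's
    vCPU allotment when utilization is high, shrink it when low.
    Thresholds are illustrative defaults."""
    if cpu_util > high and current_vcpus < max_vcpus:
        return current_vcpus + 1
    if cpu_util < low and current_vcpus > min_vcpus:
        return current_vcpus - 1
    return current_vcpus
```

Writing good values for `high` and `low` per tier is exactly the expert knowledge the abstract says users often lack, which motivates learning the policy online instead.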
UniCache: Hypervisor Managed Data Storage in RAM and Flash
Pub Date : 2014-06-27 DOI: 10.1109/CLOUD.2014.38
Jinho Hwang, Wei Zhang, R. C. Chiang, Timothy Wood, H. Howie Huang
Application and OS-level caches are crucial for hiding I/O latency and improving application performance. However, caches are designed to greedily consume memory, which can cause memory-hogging problems in virtualized data centers, since the hypervisor cannot tell what a virtual machine uses its memory for. A group of virtual machines may contain a wide range of caches: database query pools, memcached key-value stores, disk caches, etc., each of which would like as much memory as possible. The relative importance of these caches can vary significantly, yet system administrators currently have no easy way to dynamically manage the resources assigned to a range of virtual machine data caches in a unified way. To improve this situation, we have developed UniCache, a system that provides a hypervisor-managed volatile data store that can cache data either in hypervisor-controlled main memory (hot data) or on Flash-based storage (cold data). We propose a two-level cache management system that uses a combination of recency information, object size, and a prediction of the cost to recover an object to guide its eviction algorithm. We have built a prototype of UniCache using Xen, and have evaluated its effectiveness in a shared environment where multiple virtual machines compete for storage resources.
{"title":"UniCache: Hypervisor Managed Data Storage in RAM and Flash","authors":"Jinho Hwang, Wei Zhang, R. C. Chiang, Timothy Wood, H. Howie Huang","doi":"10.1109/CLOUD.2014.38","DOIUrl":"https://doi.org/10.1109/CLOUD.2014.38","url":null,"abstract":"Application and OS-level caches are crucial for hiding I/O latency and improving application performance. However, caches are designed to greedily consume memory, which can cause memory-hogging problems in a virtualized data centers since the hypervisor cannot tell for what a virtual machine uses its memory. A group of virtual machines may contain a wide range of caches: database query pools, memcached key-value stores, disk caches, etc., each of which would like as much memory as possible. The relative importance of these caches can vary significantly, yet system administrators currently have no easy way to dynamically manage the resources assigned to a range of virtual machine data caches in a unified way. To improve this situation, we have developed UniCache, a system that provides a hypervisor managed volatile data store that can cache data either in hypervisor controlled main memory (hot data) or on Flash based storage (cold data). We propose a two-level cache management system that uses a combination of recency information, object size, and a prediction of the cost to recover an object to guide its eviction algorithm. 
We have built a prototype of UniCache using Xen, and have evaluated its effectiveness in a shared environment where multiple virtual machines compete for storage resources.","PeriodicalId":288542,"journal":{"name":"2014 IEEE 7th International Conference on Cloud Computing","volume":"91 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123781092","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
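A toy single-level illustration of eviction guided by the three signals the abstract names (recency, object size, and recovery cost), here combined as a hypothetical benefit-per-byte score with a recency tie-break; UniCache's actual two-level, hypervisor-managed policy is not reproduced here:

```python
import time

class CostAwareCache:
    """Toy cache whose eviction weighs recovery cost against size, in
    the spirit of cost-aware policies such as GreedyDual-Size."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = {}   # key -> (size, recovery_cost, last_access)

    def put(self, key, size, recovery_cost):
        # Evict lowest-value items until the new object fits.
        while self.items and self._used() + size > self.capacity:
            self.items.pop(self._victim())
        self.items[key] = (size, recovery_cost, time.monotonic())

    def _used(self):
        return sum(s for s, _, _ in self.items.values())

    def _victim(self):
        # Lowest recovery-cost-per-byte goes first; ties fall to the
        # least recently touched entry.
        return min(self.items,
                   key=lambda k: (self.items[k][1] / self.items[k][0],
                                  self.items[k][2]))

cache = CostAwareCache(capacity=10)
cache.put("cheap", size=6, recovery_cost=1.0)   # easy to rebuild
cache.put("dear", size=4, recovery_cost=9.0)    # expensive to rebuild
cache.put("new", size=5, recovery_cost=5.0)     # forces one eviction
```

The third insertion evicts `"cheap"` rather than `"dear"`: although it is larger and older, what matters is that it is cheap to rebuild, exactly the trade-off the abstract's eviction predictor is meant to capture.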
A Competitive Scalability Approach for Cloud Architectures
Pub Date : 2014-06-27 DOI: 10.1109/CLOUD.2014.87
C. Ardagna, E. Damiani, Fulvio Frati, Guido Montalbano, Davide Rebeccani, M. Ughetti
The success of cloud computing has radically changed the way in which services are implemented, deployed, and made accessible to external and remote users. The cloud computing paradigm, in fact, supports a vision of distributed IT where software services and applications are outsourced and used on a pay-as-you-go basis. In this context, the ability to guarantee effective management of cloud performance and to support automatic scalability becomes a fundamental requirement. Cloud users are increasingly interested in a transparent and coherent vision of the cloud, where performance is guaranteed in different scenarios and under different, heterogeneous loads. In this paper, we analyze the benefits of an integrated scalability approach at different layers of the cloud stack, focusing on the computing infrastructure and database layers. To this aim, we provide several performance metrics, and a set of rules based on them, to evaluate the status of the cloud stack and scale it on demand to maintain stable performance. We then implement a proof-of-concept architecture to experimentally analyze cloud performance in three scalability scenarios: computing infrastructure only, database only, and the case in which the computing infrastructure and database compete for resources.
{"title":"A Competitive Scalability Approach for Cloud Architectures","authors":"C. Ardagna, E. Damiani, Fulvio Frati, Guido Montalbano, Davide Rebeccani, M. Ughetti","doi":"10.1109/CLOUD.2014.87","DOIUrl":"https://doi.org/10.1109/CLOUD.2014.87","url":null,"abstract":"The success of cloud computing has radically changed the way in which services are implemented and deployed, and made accessible to external and remote users. The cloud computing paradigm, in fact, supports a vision of distributed IT where software services and applications are outsourced and used on a pay-as-you-go basis. In this context, the ability to guarantee an effective management of cloud performance and to support automatic scalability become fundamental requirements. Cloud users are increasingly interested in a transparent and coherent vision of cloud, where performance is guaranteed in different scenarios, and under different and heterogeneous loads. In this paper, we analyze the benefits of an integrated scalability approach at different layers of the cloud stack, focusing on the computing infrastructure and database layers. To this aim, we provide different performance metrics and a set of rules based on them to evaluate the status of the cloud stack and scale it on demand to maintain stable performance. 
We then implement a proof-of-concept architecture to experimentally analyze cloud performance in three scenarios of scalability: computing infrastructure only, database only, and the case in which computing infrastructure and database compete for resources.","PeriodicalId":288542,"journal":{"name":"2014 IEEE 7th International Conference on Cloud Computing","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122919615","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8
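The core mechanism described above is a set of per-layer rules evaluated against performance metrics to decide which layer to scale. A minimal sketch of that rule evaluation, with layer names, metric names, and thresholds that are purely illustrative (the paper defines its own metrics and rules):

```python
def scaling_actions(metrics, rules):
    """Evaluate per-layer threshold rules over the current metrics and
    return the layers that should be scaled on demand."""
    actions = {}
    for layer, (metric, threshold) in rules.items():
        if metrics.get(metric, 0.0) > threshold:
            actions[layer] = "scale"
    return actions

rules = {
    "compute":  ("cpu_util", 0.75),         # app-server tier: CPU pressure
    "database": ("query_latency_ms", 200),  # DB tier: query latency
}
actions = scaling_actions({"cpu_util": 0.9, "query_latency_ms": 120}, rules)
```

Keeping the rules per layer is what lets the approach handle the paper's third scenario, where the computing infrastructure and the database compete for resources: each layer's trigger fires independently of the other's.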
Minimizing WAN Communications in Inter-datacenter Key-Value Stores
Pub Date : 2014-06-27 DOI: 10.1109/CLOUD.2014.72
H. Horie, M. Asahara, H. Yamada, K. Kono
Cloud federations have emerged as popular platforms for Internet-scale services. A cloud federation runs over multiple datacenters, because it is an aggregate of cloud services, each of which runs in a single datacenter. In such inter-datacenter environments, distributed key-value stores (DKVSs) are attractive databases in terms of scalability. However, inter-datacenter communication degrades the performance of these DKVSs because of its large latency and narrow bandwidth. In this paper, we demonstrate how to mitigate and hide the weak points of inter-datacenter communication for DKVSs. To solve these problems we introduce two techniques, called multi-layered DHT (ML-DHT) and local-first data rebuilding (LDR). ML-DHT provides a global and consistent index of key-value pairs with efficient expandability of the storage capacity; it employs a routing protocol that reduces the routing hops passing through inter-datacenter connections. LDR reduces data transfer over inter-datacenter connections by using erasure-coding techniques, enabling KVS administrators to flexibly trade off expandability of storage capacity against data-transfer performance. Experimental results demonstrate that our techniques improve latency by up to 74% compared with a Chord-based system and enable us to balance storage usage against remote data transfer.
{"title":"Minimizing WAN Communications in Inter-datacenter Key-Value Stores","authors":"H. Horie, M. Asahara, H. Yamada, K. Kono","doi":"10.1109/CLOUD.2014.72","DOIUrl":"https://doi.org/10.1109/CLOUD.2014.72","url":null,"abstract":"Cloud-federations have emerged as popular platforms for Internet-scale services. Cloud-federations are running over multiple datacenters, because a cloud-federation is an aggregate of cloud services each of which runs in a single datacenter. In such inter-datacenter environments, distributed key-value stores (DKVSs) are attractive databases in terms of scalability. However, inter-datacenter communications degrade the performance of these DKVSs because of their large latency and narrow bandwidth. In this paper, we demonstrate how to reduce and hide the weak points of inter-datacenter communications for DKVSs. To solve the problems we introduce two techniques called multi-layered DHT (ML-DHT) and local-first data rebuilding (LDR). ML-DHT provides a global and consistent index of key-value pairs with the efficient expandability of the storage capacity. It employs a routing protocol which reduces routing hops that pass through interdatacenter connections. LDR reduces data transfer on interdatacenter connections by using erasure coding techniques. It enables KVS administrators to flexibly make trade-offs between expandability of storage capacity and the performance of data transfer. 
Experimental results demonstrate that our techniques improve the latency up to 74 % compared with a Chord-based system and enable us to balance the amount of storage usage and remote data transfer.","PeriodicalId":288542,"journal":{"name":"2014 IEEE 7th International Conference on Cloud Computing","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117325335","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
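Local-first data rebuilding rests on erasure coding: a lost block is reconstructed from surviving local fragments instead of being fetched over the WAN. A minimal sketch using single-parity XOR (a real system like LDR would use a stronger code, e.g. Reed-Solomon, which tolerates more than one loss):

```python
def xor_parity(blocks):
    """Parity block for a stripe: byte-wise XOR of all data blocks
    (all blocks assumed equal length)."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

def rebuild_missing(surviving_blocks, parity):
    """Recover the single lost block of a stripe from the surviving
    blocks plus the parity, avoiding a cross-datacenter fetch."""
    return xor_parity(surviving_blocks + [parity])

stripe = [b"aaaa", b"bbbb", b"cccc"]      # data blocks stored locally
parity = xor_parity(stripe)               # extra block, costs storage
recovered = rebuild_missing([stripe[0], stripe[2]], parity)  # lost b"bbbb"
```

The parity block is the storage overhead, and local rebuilding is the transfer saving; tuning how much parity to keep is precisely the storage-versus-transfer trade-off the abstract attributes to LDR.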
Hierarchical Agent-Based Architecture for Resource Management in Cloud Data Centers
Pub Date : 2014-06-27 DOI: 10.1109/CLOUD.2014.128
F. Farahnakian, T. Pahikkala, P. Liljeberg, J. Plosila
To support resource management in a large-scale data center, we present a hierarchical agent-based architecture. In this architecture, multiple agents cooperate to minimize the number of active physical machines according to the current resource requirements. We propose a local agent in each physical machine (PM) to determine the PM's status, and a global agent to optimize VM placement based on the PMs' statuses. Experimental results show that the proposed architecture can minimize energy consumption while maintaining an acceptable QoS.
{"title":"Hierarchical Agent-Based Architecture for Resource Management in Cloud Data Centers","authors":"F. Farahnakian, T. Pahikkala, P. Liljeberg, J. Plosila","doi":"10.1109/CLOUD.2014.128","DOIUrl":"https://doi.org/10.1109/CLOUD.2014.128","url":null,"abstract":"In order to resource management in a large-scale data center, we present a hierarchical agent-based architecture. In this architecture, multi agents cooperate together to minimize the number of active physical machines according to the current resource requirements. We proposed a local agent in each physical machine (PM) to determine the PM's status and a global agent to optimizes VM placement based on PM's status. Experimental results show the proposed architecture can minimize energy consumption while maintaining an acceptable QoS.","PeriodicalId":288542,"journal":{"name":"2014 IEEE 7th International Conference on Cloud Computing","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123921201","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
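The two-level division of labor above can be sketched compactly: each local agent classifies its own PM, and the global agent turns those statuses into a consolidation plan. Thresholds and status names below are illustrative; the paper does not publish specific values in its abstract:

```python
def pm_status(utilization, low=0.2, high=0.8):
    """Local agent: classify a physical machine by its utilization."""
    if utilization < low:
        return "underloaded"
    if utilization > high:
        return "overloaded"
    return "normal"

def consolidation_plan(pms):
    """Global agent: list under-loaded PMs whose VMs should migrate
    away (so the hosts can be powered off, saving energy) and
    overloaded PMs that need relief to keep QoS acceptable."""
    statuses = {name: pm_status(u) for name, u in pms.items()}
    return {
        "power_off": [n for n, s in statuses.items() if s == "underloaded"],
        "offload":   [n for n, s in statuses.items() if s == "overloaded"],
    }

plan = consolidation_plan({"pm1": 0.1, "pm2": 0.5, "pm3": 0.9})
```

Keeping the classification local and only the placement decision global is what makes the hierarchy scale: the global agent sees a few status labels per PM, not raw telemetry.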
Scalability Analysis and Improvement of Hadoop Virtual Cluster with Cost Consideration
Pub Date : 2014-06-27 DOI: 10.1109/CLOUD.2014.85
Yanzhang He, Xiaohong Jiang, Zhaohui Wu, Kejiang Ye, Zhongzhong Chen
With the rapid development of big data and cloud computing, big data analytics as a service in the cloud is becoming increasingly popular. More and more individuals and organizations tend to rent virtual clusters to store and analyze data rather than building their own data centers. However, in a virtualized environment, it is not clear whether scaling out (using a cluster with more nodes to process big data) is better than scaling up (adding more resources to the cluster's original virtual machines). In this paper, we study the scalability and performance of Hadoop virtual clusters with cost taken into consideration. We first present the design and implementation of the VirtualMR platform, which provides users with scalable Hadoop virtual cluster services for MapReduce-based big data analytics. We then run a series of Hadoop benchmarks and real parallel machine-learning algorithms to evaluate scalability, covering both the scale-up and the scale-out method. Finally, we integrate our platform with a resource monitoring module and propose a system tuner. By analyzing the monitored data, we dynamically adjust the parameters of the Hadoop framework and the virtual machine configuration to improve resource utilization and reduce rental cost. Experimental results show that the scale-up method outperforms the scale-out method for CPU-bound applications, while the opposite holds for I/O-bound applications. The results also verify the efficiency of the system tuner in increasing resource utilization and reducing rental cost.
{"title":"Scalability Analysis and Improvement of Hadoop Virtual Cluster with Cost Consideration","authors":"Yanzhang He, Xiaohong Jiang, Zhaohui Wu, Kejiang Ye, Zhongzhong Chen","doi":"10.1109/CLOUD.2014.85","DOIUrl":"https://doi.org/10.1109/CLOUD.2014.85","url":null,"abstract":"With the rapid development of big data and cloud computing, big data analytics as a service in the cloud is becoming increasingly popular. More and more individuals and organizations tend to rent virtual cluster to store and analyze data rather than building their own data centers. However, in virtualization environment, whether scaling out using a cluster with more nodes to process big data is better than scaling up by adding more resources to the original virtual machines (VMs) in cluster is not clear. In this paper, we study the scalability performance issues of hadoop virtual cluster with cost consideration. We first present the design and implementation of VirtualMR platform which can provide users with scalable hadoop virtual cluster services for the MapReduce based big data analytics. Then we run a series of hadoop benchmarks and real parallel machine learning algorithms to evaluate the scalability performance, including scale-up method and scale-out method. Finally, we integrate our platform with resource monitoring module and propose a system tuner. By analyzing the monitored data, we dynamically adjust the parameters of hadoop framework and virtual machine configuration to improve resource utilization and reduce rent cost. Experimental results show that the scale-up method outperforms the scale-out method for CPU-bound applications, and it is opposite for I/O-bound applications. 
The results also verify the efficiency of system tuner to increase resource utilization and reduce rent cost.","PeriodicalId":288542,"journal":{"name":"2014 IEEE 7th International Conference on Cloud Computing","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129491160","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
A Capacity Allocation Approach for Volunteer Cloud Federations Using Poisson-Gamma Gibbs Sampling
Pub Date : 2014-06-27 DOI: 10.1109/CLOUD.2014.47
A. Rezgui, Gary Quezada, M. M. Rafique, Zaki Malik
In volunteer cloud federations (VCFs), volunteers join and leave without restrictions and may collectively contribute a large number of heterogeneous virtual machine instances. A challenge is to efficiently allocate this dynamic, heterogeneous capacity to a flow of incoming virtual machine (VM) instantiation requests, i.e., maximize the number of virtual machines that may be placed on the VCF. Cloud federations may allocate VMs far more efficiently if they can accurately predict the demand in terms of VM instantiation requests. In this paper, we present a stochastic technique that forecasts future demand to efficiently allocate VMs to VM instantiation requests. Our approach uses a Markov Chain Monte Carlo (MCMC) simulation known as the Poisson-Gamma Gibbs (PGG) sampler. The PGG sampler is used to determine the arrival rate of each type of VM instantiation requests. This arrival rate is then used to determine an optimal VM placement for the incoming VM instantiation requests. We compared our approach to a solution that adopts a static smallest-fit approach. The experimental results showed that our solution reacts quickly to abrupt changes in the frequency of VM instantiation requests and performs 10% better than the static smallest-fit approach in terms of the total number of satisfied requests.
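The paper's full PGG sampler is not reproduced here, but the conjugate Gamma-Poisson update at the core of such a sampler can be sketched. A minimal sketch, assuming independent per-type Poisson request counts with a Gamma prior on each rate; the function name, hyperparameters `alpha` and `beta`, and the example counts are illustrative assumptions, not taken from the paper:

```python
import random

def posterior_arrival_rates(counts_by_type, alpha=1.0, beta=1.0,
                            draws=1000, seed=42):
    """Posterior-mean Poisson arrival rate for each VM request type.

    counts_by_type maps a VM type to the request counts observed in
    successive unit-length intervals. With a Gamma(alpha, beta) prior on
    the rate and Poisson-distributed counts, the posterior is the conjugate
    Gamma(alpha + sum(counts), beta + n) -- the distribution a
    Poisson-Gamma Gibbs sweep draws each rate from.
    """
    rng = random.Random(seed)
    rates = {}
    for vm_type, counts in counts_by_type.items():
        shape = alpha + sum(counts)   # posterior shape
        rate = beta + len(counts)     # posterior rate
        # random.gammavariate takes (shape, scale), so scale = 1 / rate
        samples = [rng.gammavariate(shape, 1.0 / rate) for _ in range(draws)]
        rates[vm_type] = sum(samples) / draws
    return rates

# Four observation intervals per VM type (made-up counts); the sample
# means cluster near the analytic posterior means of 3.8 and 1.0.
rates = posterior_arrival_rates({"small": [3, 5, 4, 6], "large": [1, 0, 2, 1]})
```

The estimated per-type rates would then feed the placement step, which reserves capacity in proportion to the forecast demand for each VM type.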
{"title":"A Capacity Allocation Approach for Volunteer Cloud Federations Using Poisson-Gamma Gibbs Sampling","authors":"A. Rezgui, Gary Quezada, M. M. Rafique, Zaki Malik","doi":"10.1109/CLOUD.2014.47","DOIUrl":"https://doi.org/10.1109/CLOUD.2014.47","url":null,"abstract":"In volunteer cloud federations (VCFs), volunteers join and leave without restrictions and may collectively contribute a large number of heterogeneous virtual machine instances. A challenge is to efficiently allocate this dynamic, heterogeneous capacity to a flow of incoming virtual machine (VM) instantiation requests, i.e., maximize the number of virtual machines that may be placed on the VCF. Cloud federations may allocate VMs far more efficiently if they can accurately predict the demand in terms of VM instantiation requests. In this paper, we present a stochastic technique that forecasts future demand to efficiently allocate VMs to VM instantiation requests. Our approach uses a Markov Chain Monte Carlo (MCMC) simulation known as the Poisson-Gamma Gibbs (PGG) sampler. The PGG sampler is used to determine the arrival rate of each type of VM instantiation requests. This arrival rate is then used to determine an optimal VM placement for the incoming VM instantiation requests. We compared our approach to a solution that adopts a static smallest-fit approach. The experimental results showed that our solution reacts quickly to abrupt changes in the frequency of VM instantiation requests and performs 10% better than the static smallest-fit approach in terms of the total number of satisfied requests.","PeriodicalId":288542,"journal":{"name":"2014 IEEE 7th International Conference on Cloud Computing","volume":"53 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124981206","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
Journal: 2014 IEEE 7th International Conference on Cloud Computing