
2014 IEEE 7th International Conference on Cloud Computing: Latest Publications

Enabling Performance as a Service for a Cloud Storage System
Pub Date : 2014-06-27 DOI: 10.1109/CLOUD.2014.80
Yang Li, Li Guo, A. Supratak, Yike Guo
One of the main contributions of the paper is that we introduce "performance as a service" as a key component for future cloud storage environments. This is achieved through demonstration of the design and implementation of a multi-tier cloud storage system (CACSS), and the illustration of a linear programming model that helps to predict future data access patterns for efficient data caching management. The proposed caching algorithm aims to leverage the cloud economy by incorporating both potential performance improvement and revenue-gain into the storage systems.
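The abstract does not spell out how performance improvement and revenue gain are combined in the caching decision. The Python sketch below is only a rough illustration of an admission policy that blends the two factors; the fields, weights, and scoring function are invented for this example and are not taken from CACSS or its linear programming model.

```python
from dataclasses import dataclass

@dataclass
class CacheCandidate:
    object_id: str
    size_gb: float
    predicted_accesses: float    # e.g. from an access-pattern prediction model
    revenue_per_access: float    # what the provider earns per served request
    cache_latency_gain_s: float  # latency saved per access when served from cache

def rank_candidates(candidates, perf_weight=0.5, revenue_weight=0.5):
    """Score each object by blending performance gain and revenue gain per GB cached."""
    def score(c):
        perf = c.predicted_accesses * c.cache_latency_gain_s
        revenue = c.predicted_accesses * c.revenue_per_access
        return (perf_weight * perf + revenue_weight * revenue) / c.size_gb
    return sorted(candidates, key=score, reverse=True)

def select_for_cache(candidates, capacity_gb):
    """Greedily fill the cache tier with the highest-scoring objects that fit."""
    chosen, used = [], 0.0
    for c in rank_candidates(candidates):
        if used + c.size_gb <= capacity_gb:
            chosen.append(c.object_id)
            used += c.size_gb
    return chosen
```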
Citations: 7
Keeping Your API Keys in a Safe
Pub Date : 2014-06-27 DOI: 10.1109/CLOUD.2014.143
Hongqian Karen Lu
A Cloud API (Application Programming Interface) enables client applications to access services and manage resources hosted in the Cloud. To protect themselves and their customers, Cloud service providers (CSPs) often require client authentication for each API call. Authentication usually depends on some kind of secret (also called an API key), for example a secret access key, password, or access token. As such, the API key unlocks the door to the treasure inside the Cloud. Hence, protecting these keys is critical. It is a difficult task, especially on the client side, such as users' computers or mobile devices. How do CSPs authenticate client applications? What are the security risks of managing API keys in common practice? How can we mitigate these risks? This paper focuses on finding answers to these questions. By reviewing popular client authentication methods that CSPs use and by using Cloud APIs as software developers, we identified various security risks associated with API keys. To mitigate these risks, we use hardware secure elements for secure key provisioning, storage, and usage. The solution replaces manual key handling with end-to-end security between the CSP and its customers' secure elements. This removes the root causes of the identified risks and enhances API security. It also improves usability by eliminating manual key operations and relieving software developers of the need to work with cryptography directly.
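As background to the kind of per-call client authentication the abstract describes, the following Python sketch shows a generic HMAC-based request-signing scheme of the sort many CSPs use; the header names and signing string are illustrative, not any specific provider's API. It also makes plain why leaking the secret key is equivalent to leaking full API access.

```python
import hashlib
import hmac
import time

def sign_request(access_key_id: str, secret_key: bytes, method: str, path: str, body: bytes) -> dict:
    """Produce headers for a shared-secret (HMAC) authenticated API call."""
    timestamp = str(int(time.time()))
    string_to_sign = "\n".join([method, path, hashlib.sha256(body).hexdigest(), timestamp])
    signature = hmac.new(secret_key, string_to_sign.encode(), hashlib.sha256).hexdigest()
    return {
        "X-Access-Key-Id": access_key_id,  # identifies the client; not secret
        "X-Timestamp": timestamp,          # limits the replay window
        "X-Signature": signature,          # proves possession of the secret key
    }

def verify_request(stored_secret: bytes, method: str, path: str, body: bytes, headers: dict,
                   max_skew_s: int = 300) -> bool:
    """Server-side check: recompute the signature and compare in constant time."""
    if abs(time.time() - int(headers["X-Timestamp"])) > max_skew_s:
        return False
    string_to_sign = "\n".join([method, path, hashlib.sha256(body).hexdigest(), headers["X-Timestamp"]])
    expected = hmac.new(stored_secret, string_to_sign.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, headers["X-Signature"])

# Hypothetical usage: sign a call to a fictional endpoint and verify it server-side.
headers = sign_request("AKID123", b"s3cr3t", "GET", "/v1/instances", b"")
print(verify_request(b"s3cr3t", "GET", "/v1/instances", b"", headers))  # True
```

In the paper's proposal, the signing step would move into a hardware secure element so the secret key never sits in application memory or source code; the sketch above keeps it in software purely for illustration.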
Citations: 7
Palantir: Reseizing Network Proximity in Large-Scale Distributed Computing Frameworks Using SDN
Pub Date : 2014-06-27 DOI: 10.1109/CLOUD.2014.66
Ze Yu, Min Li, Xin Yang, Xiaolin Li
Parallel/distributed computing frameworks, such as MapReduce and Dryad, have been widely adopted to analyze massive data. Traditionally, these frameworks depend on manual configuration to acquire network proximity information to optimize data placement and task scheduling. However, this approach is cumbersome, inflexible, or even infeasible in large-scale deployments, for example across multiple datacenters. In this paper, we address this problem by utilizing the Software-Defined Networking (SDN) capability. We build Palantir, an SDN service specific to parallel/distributed computing frameworks that abstracts the proximity information out of the network. Palantir frees framework developers/administrators from having to manually configure the network. In addition, Palantir is flexible because it allows different frameworks to define proximity according to framework-specific metrics. We design and implement a datacenter-aware MapReduce to demonstrate Palantir's usefulness. Our evaluation shows that, based on Palantir, datacenter-aware MapReduce achieves significant performance improvement.
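Palantir's actual interfaces are not described in the abstract. The Python sketch below only illustrates the idea of a framework-specific proximity metric computed over a topology graph (hard-coded here, whereas Palantir would obtain the topology from the SDN controller); the node names and the hop-count placement policy are hypothetical.

```python
from collections import deque

# Toy topology: node -> list of directly connected nodes.
# In Palantir's setting this graph would come from the SDN controller.
TOPOLOGY = {
    "h1": ["tor1"], "h2": ["tor1"], "h3": ["tor2"], "h4": ["tor2"],
    "tor1": ["h1", "h2", "agg1"], "tor2": ["h3", "h4", "agg1"],
    "agg1": ["tor1", "tor2"],
}

def hop_count(src, dst):
    """Breadth-first-search distance between two nodes in the topology graph."""
    if src == dst:
        return 0
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, dist = queue.popleft()
        for nxt in TOPOLOGY[node]:
            if nxt == dst:
                return dist + 1
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return float("inf")

def pick_host(candidate_hosts, data_host):
    """A framework-specific proximity policy: prefer the candidate closest to the data."""
    return min(candidate_hosts, key=lambda h: hop_count(h, data_host))

# Example: schedule a task whose input lives on h3.
print(pick_host(["h1", "h4"], "h3"))  # -> "h4" (same rack as h3)
```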
Citations: 10
Speculative Execution for a Single Job in a MapReduce-Like System
Pub Date : 2014-06-27 DOI: 10.1109/CLOUD.2014.84
Huanle Xu, W. Lau
Parallel processing plays an important role in large-scale data analytics. It breaks a job into many small tasks which run in parallel on multiple machines, as in the MapReduce framework. One fundamental challenge facing such parallel processing is straggling tasks, which can seriously delay the completion of a job. In this paper, we focus on the speculative execution mechanism used in the literature to deal with the straggler problem. We present a theoretical framework for the optimization of a single job which differs substantially from previous heuristics-based work. More precisely, we propose two schemes for the case where the number of parallel tasks in the job is smaller than the cluster size. In the first scheme, no monitoring is needed, and we can provide the job deadline guarantee with high probability while achieving the optimal resource consumption level. The second scheme monitors task progress and launches the optimal number of duplicates when straggling occurs. On the other hand, when the number of tasks in a job is larger than the cluster size, we propose an Enhanced Speculative Execution (ESE) algorithm to make the optimal decision whenever a machine becomes available for a new scheduling. The simulation results show the ESE algorithm can reduce the job flow time by 50% while consuming fewer resources compared to the strategy without backup.
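The paper's two schemes and the ESE algorithm rest on a formal model that the abstract does not reproduce. As a minimal sketch of the monitoring-based ingredient only (estimating a task's remaining time from its observed progress and duplicating the clear stragglers), the following Python example uses arbitrary thresholds and is not the authors' algorithm.

```python
import time
from dataclasses import dataclass

@dataclass
class Task:
    task_id: str
    progress: float    # fraction of work completed, in [0, 1]
    started_at: float  # epoch seconds
    backups: int = 0   # speculative copies already launched

def estimated_remaining(task: Task, now: float) -> float:
    """Estimate time left assuming the task keeps its observed progress rate."""
    elapsed = now - task.started_at
    if task.progress <= 0.0:
        return float("inf")
    rate = task.progress / elapsed
    return (1.0 - task.progress) / rate

def pick_speculative_candidates(tasks, now=None, slowdown_factor=1.5, max_backups=1):
    """Flag tasks whose estimated remaining time is well above the median as stragglers."""
    now = now or time.time()
    remaining = {t.task_id: estimated_remaining(t, now) for t in tasks}
    finite = sorted(r for r in remaining.values() if r != float("inf"))
    if not finite:
        return []
    median = finite[len(finite) // 2]
    return [t for t in tasks
            if t.backups < max_backups and remaining[t.task_id] > slowdown_factor * median]

# Example: three tasks started at the same time; t2 has made little progress.
now = 1000.0
tasks = [Task("t1", 0.9, 900.0), Task("t2", 0.2, 900.0), Task("t3", 0.85, 900.0)]
print([t.task_id for t in pick_speculative_candidates(tasks, now=now)])  # -> ['t2']
```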
Citations: 17
Bridging the Virtualization Performance Gap for HPC Using SR-IOV for InfiniBand
Pub Date : 2014-06-27 DOI: 10.1109/CLOUD.2014.89
Malek Musleh, Vijay S. Pai, J. Walters, A. Younge, S. Crago
This paper shows that using SR-IOV for InfiniBand can enable virtualized HPC, but only if the NIC tunable parameters are set appropriately. In particular, contrary to common belief, our results show that the default policy of aggressive use of interrupt moderation can have a negative impact on the performance of InfiniBand platforms virtualized using SR-IOV. Careful tuning of interrupt moderation benefits both native and VM platforms and helps to bridge the gap between native and virtualized performance. For some workloads, the performance gap is reduced by 15-30%.
Citations: 22
A Software Product Line Approach for Configuring Cloud Robotics Applications
Pub Date : 2014-06-27 DOI: 10.1109/CLOUD.2014.104
Luca Gherardi, D. Hunziker, Mohanarajah Gajamohan
The computational requirements of the increasingly sophisticated algorithms used in today's robotics software applications have outpaced the onboard processors of the average robot. Furthermore, the development and configuration of these applications are difficult tasks that require expertise in diverse domains, including software engineering, control engineering, and computer vision. As a solution to these problems, this paper extends and integrates our previous works, which are based on two promising techniques: Cloud Robotics and Software Product Lines. Cloud Robotics provides a powerful and scalable environment to offload the computationally expensive algorithms resulting in low-cost processors and light-weight robots. Software Product Lines allow the end user to deploy and configure complex robotics applications without dealing with low-level problems such as configuring algorithms and designing architectures. This paper discusses the proposed method in depth, and demonstrates its advantages with a case study.
Citations: 21
Core-Selecting Auctions for Dynamically Allocating Heterogeneous VMs in Cloud Computing
Pub Date : 2014-06-27 DOI: 10.1109/CLOUD.2014.30
Haoming Fu, Zongpeng Li, Chuan Wu, Xiaowen Chu
In a cloud market, the cloud provider provisions heterogeneous virtual machine (VM) instances from its resource pool, for allocation to cloud users. Auction-based allocations are efficient in assigning VMs to users who value them the most. Existing auction design often overlooks the heterogeneity of VMs, and does not consider dynamic, demand-driven VM provisioning. Moreover, the classic VCG auction leads to unsatisfactory seller revenues and vulnerability to a strategic bidding behavior known as shill bidding. This work presents a new type of core-selecting VM auctions, which are combinatorial auctions that always select bidder charges from the core of the price vector space, with guaranteed economic efficiency under truthful bidding. These auctions represent a comprehensive three-phase mechanism that instructs the cloud provider to judiciously assemble, allocate, and price VM bundles. They are proof against shills, can improve seller revenue over existing auction mechanisms, and can be tailored to maximize truthfulness.
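The pricing rule (selecting charges from the core of the price vector space) is the paper's central mechanism and is not reproduced here. The Python sketch below only illustrates the allocation phase of a toy combinatorial VM auction, finding the value-maximizing feasible set of bundle bids by brute force; the supply, the bids, and the exhaustive search are all illustrative and would not scale beyond small instances.

```python
from itertools import combinations

# Quantities of each VM type available in the provider's resource pool (toy numbers).
SUPPLY = {"small": 4, "large": 2}

# Each bid asks for a bundle of VM types and states a willingness to pay.
BIDS = [
    {"bidder": "A", "bundle": {"small": 2, "large": 1}, "price": 10},
    {"bidder": "B", "bundle": {"small": 3},             "price": 7},
    {"bidder": "C", "bundle": {"small": 1, "large": 1}, "price": 6},
]

def feasible(bid_set):
    """Check that the selected bundles fit within the VM supply."""
    used = {}
    for bid in bid_set:
        for vm, qty in bid["bundle"].items():
            used[vm] = used.get(vm, 0) + qty
    return all(used.get(vm, 0) <= SUPPLY.get(vm, 0) for vm in used)

def winner_determination(bids):
    """Brute-force search over subsets of bids for the value-maximizing feasible allocation."""
    best, best_value = [], 0
    for r in range(1, len(bids) + 1):
        for subset in combinations(bids, r):
            if feasible(subset):
                value = sum(b["price"] for b in subset)
                if value > best_value:
                    best, best_value = list(subset), value
    return best, best_value

winners, value = winner_determination(BIDS)
print([w["bidder"] for w in winners], value)  # -> ['A', 'C'] 16
```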
Citations: 34
Data Farming on Heterogeneous Clouds
Pub Date : 2014-06-27 DOI: 10.1109/CLOUD.2014.120
Dariusz Król, R. Słota, J. Kitowski, L. Dutka, Jakub Liput
Using multiple Clouds as a single environment to conduct simulation-based virtual experiments at large scale is a challenging problem. This paper describes how this can be achieved with the Scalarm platform in the context of data farming. In particular, a use case with a private Cloud combined with public, commercial Clouds is studied. We discuss the current architecture and implementation of Scalarm in terms of supporting different infrastructures, and propose how it can be extended in order to attain a unification of different Clouds' usage. We discuss different aspects of Cloud usage unification, including scheduling virtual machines, authentication, and virtual machine state monitoring. An experimental evaluation of the presented solution is conducted with a genetic algorithm solving the well-known Traveling Salesman Problem. The evaluation uses three different resource configurations: using only the public Cloud, using only the private Cloud, and using both public and private Clouds.
Citations: 9
PowerCass: Energy Efficient, Consistent Hashing Based Storage for Micro Clouds Based Infrastructure
Pub Date : 2014-06-27 DOI: 10.1109/CLOUD.2014.17
Frezewd Lemma Tena, Thomas Knauth, C. Fetzer
Consistent-hashing-based storage systems are used in many real-world applications for which energy is one of the main cost factors. However, these systems are typically designed and deployed without any mechanisms to save energy at times of low demand. We present an energy-conserving implementation of a consistent-hashing-based key-value store, called PowerCass, based on Apache's Cassandra. In PowerCass, nodes are divided into three groups: active, dormant, and sleepy. Nodes in the active group cover all the data and run continuously. Dormant nodes are only powered during peak activity time and for replica synchronization. Sleepy nodes are offline almost all the time, except for replica synchronization and exceptional peak loads. With this simple and elegant approach we are able to reduce energy consumption by up to 66% compared to the unmodified key-value store Cassandra.
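PowerCass's replica placement guarantees that the active group alone holds a full copy of the data; the minimal Python sketch below does not reproduce that placement logic, it only illustrates group-aware read routing on a consistent-hash ring. The node names, replication factor, and fallback rule are assumptions for this example, not PowerCass's implementation.

```python
import bisect
import hashlib

def _hash(value: str) -> int:
    return int(hashlib.md5(value.encode()).hexdigest(), 16)

class Ring:
    """A minimal consistent-hash ring; each node carries a power-state group tag."""
    def __init__(self, nodes, replicas=3):
        # nodes: dict of node name -> group ("active", "dormant", or "sleepy")
        self.groups = dict(nodes)
        self.replicas = replicas
        self.ring = sorted((_hash(n), n) for n in nodes)
        self.points = [p for p, _ in self.ring]

    def replica_nodes(self, key):
        """The `replicas` distinct nodes that follow the key's position on the ring."""
        idx = bisect.bisect(self.points, _hash(key)) % len(self.ring)
        result = []
        while len(result) < self.replicas:
            node = self.ring[idx % len(self.ring)][1]
            if node not in result:
                result.append(node)
            idx += 1
        return result

    def read_target(self, key):
        """Prefer a replica that is powered on; otherwise fall back to waking a replica."""
        nodes = self.replica_nodes(key)
        for node in nodes:
            if self.groups[node] == "active":
                return node
        return nodes[0]  # would require waking a dormant/sleepy node

ring = Ring({"n1": "active", "n2": "dormant", "n3": "active", "n4": "sleepy"}, replicas=3)
print(ring.read_target("user:42"))
```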
Citations: 8
Time-Constrained Live VM Migration in Share-Nothing IaaS-Clouds
Pub Date : 2014-06-27 DOI: 10.1109/CLOUD.2014.18
Konstantinos Tsakalozos, Vasilis Verroios, M. Roussopoulos, A. Delis
Both economic reasons and interoperation requirements necessitate the deployment of IaaS-clouds based on a share-nothing architecture. Here, live VM migration becomes a major impediment to achieving cloud-wide load balancing via selective and timely VM-migrations. Our approach is based on copying virtual disk images and keeping them synchronized during the VM migration operation. In this way, we ameliorate the limitations set by shared storage cloud designs as we place no constraints on the cloud's scalability and load-balancing capabilities. We propose a special-purpose file system, termed MigrateFS, that performs virtual disk replication within specified time-constraints while avoiding internal network congestion. Management of resource consumption during VM migration is supervised by a low-overhead and scalable distributed network of brokers. We show that our approach can reduce up to 24% the stress of already saturated physical network links during load balancing operations.
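MigrateFS's controller and its broker network are not detailed in the abstract. The Python sketch below only illustrates the back-of-the-envelope reasoning behind time-constrained disk replication: the sustained copy rate must beat both the migration deadline and the guest's ongoing write (dirty) rate, while staying within the bandwidth budget a broker grants on that link. The numbers and the 1.2 safety factor are made up for this example.

```python
def required_rate_mbps(remaining_gb: float, deadline_s: float) -> float:
    """Minimum sustained transfer rate needed to finish the disk copy before the deadline."""
    return remaining_gb * 8 * 1024 / deadline_s  # GB -> megabits

def plan_rate(remaining_gb, dirty_rate_mbps, deadline_s, link_budget_mbps):
    """Pick a replication rate that beats both the deadline and the dirty rate,
    but never exceeds the bandwidth budget allowed on this link."""
    need = max(required_rate_mbps(remaining_gb, deadline_s), dirty_rate_mbps * 1.2)
    if need > link_budget_mbps:
        return None  # infeasible within the time constraint; reschedule or pick another target
    return need

# Example: 40 GB still to copy, guest dirties 80 Mbit/s, 30 minutes left, 1 Gbit/s budget.
print(plan_rate(40, 80, 30 * 60, 1000))  # -> roughly 182 Mbit/s
```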
Citations: 16