
Latest Publications from the 2012 IEEE Fifth International Conference on Cloud Computing

Sharing-Aware Cloud-Based Mobile Outsourcing
Pub Date : 2012-06-24 DOI: 10.1109/CLOUD.2012.48
C. Mei, Daniel Taylor, Chenyu Wang, A. Chandra, J. Weissman
Mobile devices, such as smart phones and tablets, are becoming the universal interface to online services and applications. However, such devices have limited computational power and battery life, which limits their ability to execute resource-intensive applications. Computation outsourcing to external resources has been proposed as a technique to alleviate this problem. Most existing work on mobile outsourcing has focused either on single-application optimization or on outsourcing to fixed, local resources, under the assumption that wide-area latency is prohibitively high. These approaches, however, neglect the opportunity to improve outsourcing performance by exploiting the relationships among multiple applications and by optimizing server provisioning. In this paper, we present the design and implementation of an Android/Amazon EC2-based mobile application outsourcing framework that leverages the cloud for scalability, elasticity, and multi-user code/data sharing. Using this framework, we empirically demonstrate that the cloud is not only a feasible but a desirable offloading platform for latency-tolerant applications. We propose data mining techniques to detect data sharing across multiple applications, and develop novel scheduling algorithms that exploit such sharing for better outsourcing performance. Additionally, our platform is designed to scale dynamically to support a large number of concurrent mobile users. Experiments show that our proposed techniques and algorithms substantially improve application performance while achieving high efficiency in computation-resource and network usage.
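The abstract does not spell out the scheduling algorithm, so the following is a minimal Python sketch of the general idea only, not the authors' implementation: group outsourced tasks by the data items they reference and assign each task to the server whose cache already holds most of that data, so shared inputs cross the wide-area link only once. The Server class, the schedule_sharing_aware function, and the example tasks are all hypothetical.

# Hypothetical sketch: assign each outsourced task to the server whose cache
# already overlaps most with the task's input data, so shared data crosses the
# wide-area link only once.
class Server:
    def __init__(self, name):
        self.name = name
        self.cached = set()   # data items already fetched to this server
        self.queue = []       # tasks assigned to this server

def schedule_sharing_aware(tasks, servers):
    """tasks: list of (task_id, set_of_data_items); servers: list of Server."""
    for task_id, data in sorted(tasks, key=lambda t: -len(t[1])):
        # Prefer the server with the largest cache overlap for this task.
        best = max(servers, key=lambda s: len(s.cached & data))
        best.queue.append(task_id)
        best.cached |= data   # after execution the shared data stays cached
    return servers

if __name__ == "__main__":
    tasks = [("ocr_page1", {"dict_en"}), ("ocr_page2", {"dict_en"}),
             ("translate", {"dict_en", "dict_fr"})]
    servers = [Server("ec2-a"), Server("ec2-b")]
    for s in schedule_sharing_aware(tasks, servers):
        print(s.name, s.queue, sorted(s.cached))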
{"title":"Sharing-Aware Cloud-Based Mobile Outsourcing","authors":"C. Mei, Daniel Taylor, Chenyu Wang, A. Chandra, J. Weissman","doi":"10.1109/CLOUD.2012.48","DOIUrl":"https://doi.org/10.1109/CLOUD.2012.48","url":null,"abstract":"Mobile devices, such as smart phones and tablets, are becoming the universal interface to online services and applications. However, such devices have limited computational power and battery life, which limits their ability to execute resource-intensive applications. Computation outsourcing to external resources has been proposed as a technique to alleviate this problem. Most existing work on mobile outsourcing has focused on either single application optimization or outsourcing to fixed, local resources, with the assumption that wide-area latency is prohibitively high. However, the opportunity of improving the outsourcing performance by utilizing the relation among multiple applications and optimizing the server provisioning is neglected. In this paper, we present the design and implementation of an Android/Amazon EC2-based mobile application outsourcing framework, leveraging the cloud for scalability, elasticity, and multi-user code/data sharing. Using this framework, we empirically demonstrate that the cloud is not only feasible but desirable as an offloading platform for latency-tolerant applications. We have proposed to use data mining techniques to detect data sharing across multiple applications, and developed novel scheduling algorithms that exploit such data sharing for better outsourcing performance. Additionally, our platform is designed to dynamically scale to support a large number of mobile users concurrently. Experiments show that our proposed techniques and algorithms substantially improve application performance, while achieving high efficiency in terms of computation resource and network usage.","PeriodicalId":214084,"journal":{"name":"2012 IEEE Fifth International Conference on Cloud Computing","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114267585","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 19
Energy Management in IaaS Clouds: A Holistic Approach
Pub Date : 2012-06-24 DOI: 10.1109/CLOUD.2012.50
Eugen Feller, C. Rohr, D. Margery, C. Morin
Energy efficiency has become one of the major design constraints for current and future cloud data center operators. One way to conserve energy is to transition idle servers into a lower power state (e.g., suspend). Virtual machine (VM) placement and dynamic VM scheduling algorithms have therefore been proposed to facilitate the creation of such idle times. However, these algorithms are rarely integrated into a holistic approach and experimentally evaluated in a realistic environment. In this paper we present the energy management algorithms and mechanisms of Snooze, a novel holistic energy-aware VM management framework for private clouds. We conduct an extensive evaluation of the energy and performance implications of our system on 34 power-metered machines of the Grid'5000 experimentation testbed under dynamic web workloads. The results show that the energy-saving mechanisms allow Snooze to dynamically scale data center energy consumption proportionally to the load, thus achieving substantial energy savings with only limited impact on application performance.
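As a rough illustration of the kind of energy-saving mechanism described (suspending idle servers and resuming them under load), here is a minimal Python sketch; the thresholds, data layout, and the manage_power function are assumptions for illustration and do not reflect Snooze's actual algorithms.

# Illustrative sketch (not Snooze's actual algorithm): suspend servers that have
# been idle for longer than a threshold, and resume a suspended server when
# cluster utilization rises above a wake-up level.
import time

IDLE_THRESHOLD_S = 300    # assumed idle time before a server is suspended
WAKE_UTILIZATION = 0.8    # assumed cluster load that triggers a resume

def manage_power(servers, cluster_utilization, now=None):
    """servers: list of dicts with 'state', 'vms', and 'idle_since' keys."""
    now = now or time.time()
    for srv in servers:
        idle = srv["state"] == "on" and not srv["vms"]
        if idle and now - srv["idle_since"] > IDLE_THRESHOLD_S:
            srv["state"] = "suspended"        # e.g. suspend-to-RAM
    if cluster_utilization > WAKE_UTILIZATION:
        for srv in servers:
            if srv["state"] == "suspended":
                srv["state"] = "on"           # wake one server per pass
                srv["idle_since"] = now
                break
    return servers

if __name__ == "__main__":
    fleet = [{"state": "on", "vms": [], "idle_since": time.time() - 600},
             {"state": "on", "vms": ["vm-1"], "idle_since": time.time()}]
    print(manage_power(fleet, cluster_utilization=0.3))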
{"title":"Energy Management in IaaS Clouds: A Holistic Approach","authors":"Eugen Feller, C. Rohr, D. Margery, C. Morin","doi":"10.1109/CLOUD.2012.50","DOIUrl":"https://doi.org/10.1109/CLOUD.2012.50","url":null,"abstract":"Energy efficiency has now become one of the major design constraints for current and future cloud data center operators. One way to conserve energy is to transition idle servers into a lower power-state (e.g. suspend). Therefore, virtual machine (VM) placement and dynamic VM scheduling algorithms are proposed to facilitate the creation of idle times. However, these algorithms are rarely integrated in a holistic approach and experimentally evaluated in a realistic environment. In this paper we present the energy management algorithms and mechanisms of a novel holistic energy-aware VM management framework for private clouds called Snooze. We conduct an extensive evaluation of the energy and performance implications of our system on 34 power-metered machines of the Grid'5000 experimentation testbed under dynamic web workloads. The results show that the energy saving mechanisms allow Snooze to dynamically scale data center energy consumption proportionally to the load, thus achieving substantial energy savings with only limited impact on application performance.","PeriodicalId":214084,"journal":{"name":"2012 IEEE Fifth International Conference on Cloud Computing","volume":"165 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125933592","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 58
Energy-Price-Driven Request Dispatching for Cloud Data Centers
Pub Date : 2012-06-24 DOI: 10.1109/CLOUD.2012.115
Takumi Sakamoto, H. Yamada, H. Horie, K. Kono
Cloud services make use of data center resources so that hosted applications can utilize them as needed. To offer a large amount of computational resources, cloud service providers manage tens of geographically distributed data centers. Since each data center is made up of hundreds of thousands of physical machines, energy consumption is a major concern for cloud service providers. The electricity cost imposes significant financial overhead on these companies and pushes up prices for cloud users. This paper presents an energy-price-driven request dispatcher that forwards client requests to data centers in a way that reduces electricity costs. In our technique, mapping nodes, which are used as authoritative DNS servers, forward client requests to data centers where the electricity price is relatively low. We additionally develop a policy that gradually shifts client requests to electrically cheaper data centers, taking into account application latency requirements and data center loads. Our simulation-based results show that our technique can reduce electricity costs by 15% compared with randomly dispatching client requests.
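A minimal sketch of the dispatching idea, assuming each mapping node knows the current electricity price, latency, and load of every data center; the dispatch function and the 0.9 load cap are illustrative assumptions, not the paper's exact policy.

# Illustrative sketch (simplified): a mapping node picks the cheapest data
# center whose latency and load satisfy the request's constraints, shifting
# traffic toward cheaper electricity.
def dispatch(request_latency_budget_ms, datacenters):
    """datacenters: list of dicts with 'name', 'price_kwh', 'latency_ms', 'load'."""
    feasible = [dc for dc in datacenters
                if dc["latency_ms"] <= request_latency_budget_ms and dc["load"] < 0.9]
    if not feasible:                      # fall back to the closest data center
        return min(datacenters, key=lambda dc: dc["latency_ms"])["name"]
    return min(feasible, key=lambda dc: dc["price_kwh"])["name"]

if __name__ == "__main__":
    dcs = [{"name": "us-east", "price_kwh": 0.12, "latency_ms": 40, "load": 0.7},
           {"name": "us-west", "price_kwh": 0.09, "latency_ms": 90, "load": 0.5}]
    print(dispatch(100, dcs))   # -> us-west (cheaper and within the latency budget)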
{"title":"Energy-Price-Driven Request Dispatching for Cloud Data Centers","authors":"Takumi Sakamoto, H. Yamada, H. Horie, K. Kono","doi":"10.1109/CLOUD.2012.115","DOIUrl":"https://doi.org/10.1109/CLOUD.2012.115","url":null,"abstract":"Cloud services make use of data center resources so that hosted applications can utilize them as needed. To offer a large amount of computational resources, cloud service providers manage tens of geographically distributed data centers. Since each data center is made up of hundreds of thousands of physical machines, energy consumption is a major concern for cloud service providers. The electric cost imposes significant financial overheads on those companies and pushes up the price for the cloud users. This paper presents an energy-price-driven request dispatcher that forwards client requests to data centers in an electric-cost-saving way. In our technique, mapping nodes, which are used as authoritative DNS servers, forward client requests to data centers in which the electric price is relatively lower. We additionally develop a policy that gradually shifts client requests to electrically cheaper data centers, taking into account application latency requirements and data center loads. Our simulation-based results show that our technique can reduce electric cost by 15% more than randomly dispatching client requests.","PeriodicalId":214084,"journal":{"name":"2012 IEEE Fifth International Conference on Cloud Computing","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126786166","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
Topology-Aware Deployment of Scientific Applications in Cloud Computing
Pub Date : 2012-06-24 DOI: 10.1109/CLOUD.2012.70
Pei Fan, Zhenbang Chen, Ji Wang, Zibin Zheng, Michael R. Lyu
Nowadays, more and more scientific applications are moving to cloud computing. The optimal deployment of scientific applications is critical for providing good service to users. Scientific applications are usually topology-aware applications; therefore, considering the topology of a scientific application during deployment benefits the application's performance. However, it is challenging to automatically discover and make use of the communication pattern of a scientific application while deploying it on the cloud. To address this challenge, we propose in this paper a framework that discovers the communication topology of a scientific application through pre-execution and multi-scale graph clustering, based on which the deployment can be optimized. Comprehensive experiments are conducted using a well-known MPI benchmark, comparing the performance of our method with that of other methods. The experimental results show the effectiveness of our topology-aware deployment method.
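To illustrate the overall flow (profile communication, cluster heavily-communicating processes, co-locate each cluster), here is a simplified Python sketch; it uses a naive greedy grouping in place of the paper's multi-scale graph clustering, and the greedy_clusters function and its inputs are hypothetical.

# Illustrative sketch: build a communication graph from profiled message volumes,
# then greedily group heavily-communicating MPI ranks so each group can be
# co-located on one VM/host. The paper uses multi-scale graph clustering; this
# greedy grouping only demonstrates the idea.
from collections import defaultdict

def greedy_clusters(comm_volume, group_size):
    """comm_volume: dict {(rank_a, rank_b): bytes}; returns a list of rank groups."""
    edges = sorted(comm_volume.items(), key=lambda kv: -kv[1])
    group_of = {}
    groups = defaultdict(set)
    next_id = 0
    for (a, b), _ in edges:
        if a not in group_of and b not in group_of:
            group_of[a] = group_of[b] = next_id
            groups[next_id] = {a, b}
            next_id += 1
        elif a in group_of and b not in group_of and len(groups[group_of[a]]) < group_size:
            group_of[b] = group_of[a]
            groups[group_of[a]].add(b)
        elif b in group_of and a not in group_of and len(groups[group_of[b]]) < group_size:
            group_of[a] = group_of[b]
            groups[group_of[b]].add(a)
    return list(groups.values())

if __name__ == "__main__":
    volumes = {(0, 1): 500, (2, 3): 450, (1, 2): 10}
    print(greedy_clusters(volumes, group_size=2))   # ranks 0-1 and 2-3 co-located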
{"title":"Topology-Aware Deployment of Scientific Applications in Cloud Computing","authors":"Pei Fan, Zhenbang Chen, Ji Wang, Zibin Zheng, Michael R. Lyu","doi":"10.1109/CLOUD.2012.70","DOIUrl":"https://doi.org/10.1109/CLOUD.2012.70","url":null,"abstract":"Nowadays, more and more scientific applications are moving to cloud computing. The optimal deployment of scientific applications is critical for providing good services to users. Scientific applications are usually topology-aware applications. Therefore, considering the topology of a scientific application during the development will benefit the performance of the application. However, it is challenging to automatically discover and make use of the communication pattern of a scientific application while deploying the application on cloud. To attack this challenge, in this paper, we propose a framework to discover the communication topology of a scientific application by pre-execution and multi-scale graph clustering, based on which the deployment can be optimized. Comprehensive experiments are conducted by employing a well-known MPI benchmark and comparing the performance of our method with those of other methods. The experimental results show the effectiveness of our topology-aware deployment method.","PeriodicalId":214084,"journal":{"name":"2012 IEEE Fifth International Conference on Cloud Computing","volume":"89 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127041389","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 45
Preemption-Aware Energy Management in Virtualized Data Centers
Pub Date : 2012-06-24 DOI: 10.1109/CLOUD.2012.147
M. Salehi, P. Radha Krishna, Krishnamurty Sai Deepak, R. Buyya
Energy efficiency is one of the main challenges that data centers are facing nowadays. A considerable portion of the energy consumed in these environments is wasted because of idling resources. To avoid this wastage, offering services with a variety of SLAs (with different prices and priorities) is a common practice. The question we investigate in this research is how the energy consumption of a data center that offers various SLAs can be reduced. To answer this question we propose an adaptive energy management policy that employs virtual machine (VM) preemption to adjust energy consumption based on user performance requirements. We have implemented our proposed energy management policy in Haizea, a real scheduling platform for virtualized data centers. Experimental results reveal 18% energy conservation (up to 4000 kWh in 30 days) compared with other baseline policies, without any major increase in SLA violations.
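The following Python sketch illustrates one way a preemption-aware placement decision could trade SLA priority against waking additional hosts; the place_request function, its data layout, and the decision order are assumptions for illustration and are not Haizea's actual lease scheduling logic.

# Illustrative sketch of a preemption-aware admission decision: serve a request
# on an already-active host by preempting lower-priority (cheaper-SLA) VMs when
# possible, and only power on an extra host as a last resort.
def place_request(request, hosts):
    """request: {'cores', 'priority'}; hosts: list of
       {'name', 'powered_on', 'free_cores', 'preemptible_cores'}."""
    # 1) Fit on a powered-on host without disturbing anyone.
    for h in hosts:
        if h["powered_on"] and h["free_cores"] >= request["cores"]:
            return ("place", h["name"])
    # 2) Preempt best-effort VMs on a powered-on host (avoids waking a machine).
    for h in hosts:
        if h["powered_on"] and h["free_cores"] + h["preemptible_cores"] >= request["cores"]:
            return ("preempt_and_place", h["name"])
    # 3) Otherwise wake a sleeping host, paying the energy cost.
    for h in hosts:
        if not h["powered_on"]:
            return ("power_on_and_place", h["name"])
    return ("reject", None)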
{"title":"Preemption-Aware Energy Management in Virtualized Data Centers","authors":"M. Salehi, P. Radha, Krishna Krishnamurty, Sai Deepak, R. Buyya","doi":"10.1109/CLOUD.2012.147","DOIUrl":"https://doi.org/10.1109/CLOUD.2012.147","url":null,"abstract":"Energy efficiency is one of the main challenge hat data centers are facing nowadays. A considerable portion of the consumed energy in these environments is wasted because of idling resources. To avoid wastage, offering services with variety of SLAs (with different prices and priorities) is a common practice. The question we investigate in this research is how the energy consumption of a data center that offers various SLAs can be reduced. To answer this question we propose an adaptive energy management policy that employs virtual machine(VM) preemption to adjust the energy consumption based on user performance requirements. We have implementedour proposed energy management policy in Haize a as a real scheduling platform for virtualized data centers. Experimental results reveal 18% energy conservation (up to 4000 kWh in 30 days) comparing with other baseline policies without any major increase in SLA violation.","PeriodicalId":214084,"journal":{"name":"2012 IEEE Fifth International Conference on Cloud Computing","volume":"67 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125153748","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 30
Scheduling Parallel Tasks onto Opportunistically Available Cloud Resources
Pub Date : 2012-06-24 DOI: 10.1109/CLOUD.2012.15
T. He, Shiyao Chen, Hyoil Kim, L. Tong, Kang-Won Lee
We consider the problem of opportunistically scheduling low-priority tasks onto underutilized computation resources in the cloud left by high-priority tasks. To avoid conflicts with high-priority tasks, the scheduler must suspend the low-priority tasks (causing waiting), or move them to other underutilized servers (causing migration), if the high-priority tasks resume. The goal of opportunistic scheduling is to schedule the low-priority tasks onto intermittently available server resources while minimizing the combined cost of waiting and migration. Moreover, we aim to support multiple parallel low-priority tasks with synchronization constraints. Under the assumption that servers' availability to low-priority tasks can be modeled as ON/OFF Markov chains, we have shown that the optimal solution requires solving a Markov Decision Process (MDP) that has exponential complexity, and efficient solutions are known only in the case of homogeneously behaving servers. In this paper, we propose an efficient heuristic scheduling policy by formulating the problem as restless Multi-Armed Bandits (MAB) under relaxed synchronization. We prove the indexability of the problem and provide closed-form formulas to compute the indices. Our evaluation using real data center traces shows that the performance closely matches the prediction of the Markov chain model, and the proposed index policy achieves consistently good performance under various server dynamics compared with existing policies.
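A toy Python sketch of what an index policy over ON/OFF server availability looks like: each server is ranked by a scalar index computed from its Markov-chain parameters, and tasks go to the highest-ranked server. The availability_index formula below is a made-up heuristic stand-in, not the closed-form restless-bandit indices derived in the paper.

# Illustrative sketch: each server's availability to low-priority work is a
# two-state ON/OFF Markov chain with transition probabilities p_on_off and
# p_off_on. Servers are ranked by a simple heuristic index (expected fraction of
# time available, discounted by migration and waiting penalties).
def availability_index(p_on_off, p_off_on, migration_cost, waiting_cost):
    stationary_on = p_off_on / (p_on_off + p_off_on)   # long-run P(server is ON)
    expected_on_run = 1.0 / p_on_off                   # mean length of an ON period
    # Longer ON runs mean fewer interruptions, hence fewer migrations/waits.
    return stationary_on - migration_cost / expected_on_run - waiting_cost * p_on_off

def pick_server(servers, migration_cost=0.2, waiting_cost=0.1):
    """servers: dict name -> (p_on_off, p_off_on); returns the best-indexed name."""
    return max(servers,
               key=lambda s: availability_index(*servers[s], migration_cost, waiting_cost))

if __name__ == "__main__":
    servers = {"srv-a": (0.05, 0.20), "srv-b": (0.30, 0.30)}
    print(pick_server(servers))   # srv-a: mostly ON and rarely interrupted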
{"title":"Scheduling Parallel Tasks onto Opportunistically Available Cloud Resources","authors":"T. He, Shiyao Chen, Hyoil Kim, L. Tong, Kang-Won Lee","doi":"10.1109/CLOUD.2012.15","DOIUrl":"https://doi.org/10.1109/CLOUD.2012.15","url":null,"abstract":"We consider the problem of opportunistically scheduling low-priority tasks onto underutilized computation resources in the cloud left by high-priority tasks. To avoid conflicts with high-priority tasks, the scheduler must suspend the low-priority tasks (causing waiting), or move them to other underutilized servers (causing migration), if the high-priority tasks resume. The goal of opportunistic scheduling is to schedule the low-priority tasks onto intermittently available server resources while minimizing the combined cost of waiting and migration. Moreover, we aim to support multiple parallel low-priority tasks with synchronization constraints. Under the assumption that servers' availability to low-priority tasks can be modeled as ON/OFF Markov chains, we have shown that the optimal solution requires solving a Markov Decision Process (MDP) that has exponential complexity, and efficient solutions are known only in the case of homogeneously behaving servers. In this paper, we propose an efficient heuristic scheduling policy by formulating the problem as restless Multi-Armed Bandits (MAB) under relaxed synchronization. We prove the index ability of the problem and provide closed-form formulas to compute the indices. Our evaluation using real data center traces shows that the performance result closely matches the prediction by the Markov chain model, and the proposed index policy achieves consistently good performance under various server dynamics compared with the existing policies.","PeriodicalId":214084,"journal":{"name":"2012 IEEE Fifth International Conference on Cloud Computing","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131454253","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 31
Capturing Cloud Computing Knowledge and Experience in Patterns
Pub Date : 2012-06-24 DOI: 10.1109/CLOUD.2012.124
Christoph Fehling, Thilo Ewald, F. Leymann, Michael Pauly, Jochen Rütschlin, D. Schumm
The industry-driven evolution of cloud computing tends to obfuscate the common underlying architectural concepts of cloud offerings and their implications for hosted applications. Patterns are one way to document such architectural principles and to make good solutions to recurring (architectural) cloud challenges reusable. To capture cloud computing best practice from existing cloud applications and provider-specific documentation, we propose an elaborated pattern format that enables abstraction of concepts and reusability of knowledge across various use cases. We present a detailed step-by-step pattern identification process supported by a pattern authoring toolkit. We continuously apply this process to identify a large set of cloud patterns. In this paper, we introduce two new cloud patterns we recently identified in industrial scenarios. The approach is aimed at cloud architects, developers, and researchers alike, enabling them to apply this pattern identification process to create traceable and well-structured pieces of knowledge in their individual fields of expertise. As an entry point, we recap challenges introduced by cloud computing in various domains.
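As a rough illustration of what an "elaborated pattern format" could look like in machine-readable form, here is a hypothetical Python sketch; the field names and the Elastic Queue example are invented for illustration and are not the pattern format or the patterns defined in the paper.

# Hypothetical sketch of a structured pattern record for cloud patterns.
from dataclasses import dataclass, field
from typing import List

@dataclass
class CloudPattern:
    name: str                 # short, memorable pattern name
    context: str              # situation in which the problem arises
    problem: str              # recurring challenge, phrased as a question
    forces: List[str]         # competing concerns that shape the solution
    solution: str             # abstract, provider-independent resolution
    known_uses: List[str] = field(default_factory=list)   # concrete offerings observed
    related: List[str] = field(default_factory=list)      # links to other patterns

if __name__ == "__main__":
    elastic_queue = CloudPattern(
        name="Elastic Queue",
        context="Workload arrives in unpredictable bursts.",
        problem="How can processing capacity follow demand without over-provisioning?",
        forces=["pay-per-use pricing", "startup latency of new instances"],
        solution="Buffer requests in a queue and scale workers on queue length.",
        known_uses=["queue service plus auto-scaling worker pool"],
    )
    print(elastic_queue.name)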
{"title":"Capturing Cloud Computing Knowledge and Experience in Patterns","authors":"Christoph Fehling, Thilo Ewald, F. Leymann, Michael Pauly, Jochen Rütschlin, D. Schumm","doi":"10.1109/CLOUD.2012.124","DOIUrl":"https://doi.org/10.1109/CLOUD.2012.124","url":null,"abstract":"The industry-driven evolution of cloud computing tends to obfuscate the common underlying architectural concepts of cloud offerings and their implications on hosted applications. Patterns are one way to document such architectural principles and to make good solutions to reoccurring (architectural) cloud challenges reusable. To capture cloud computing best practice from existing cloud applications and provider-specific documentation, we propose to use an elaborated pattern format enabling abstraction of concepts and reusability of knowledge in various use cases. We present a detailed step-by-step pattern identification process supported by a pattern authoring toolkit. We continuously apply this process to identify a large set of cloud patterns. In this paper, we introduce two new cloud patterns we identified in industrial scenarios recently. The approach aims at cloud architects, developers, and researchers alike to also apply this pattern identification process to create traceable and well-structured pieces of knowledge in their individual field of expertise. As entry point, we recap challenges introduced by cloud computing in various domains.","PeriodicalId":214084,"journal":{"name":"2012 IEEE Fifth International Conference on Cloud Computing","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131916415","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 36
A Performance Interference Model for Managing Consolidated Workloads in QoS-Aware Clouds
Pub Date : 2012-06-24 DOI: 10.1109/CLOUD.2012.25
Qian Zhu, Teresa Tung
Cloud computing offers users the ability to access large pools of computational and storage resources on demand without the burden of managing and maintaining their own IT assets. Today's cloud providers charge users based upon the amount of resources used or reserved, with only minimal guarantees of the quality of service (QoS) experienced by the users' applications. As virtualization technologies proliferate among cloud providers, consolidating multiple user applications onto multi-core servers increases revenue and improves resource utilization. However, consolidation introduces performance interference between co-located workloads, which significantly impacts application QoS. A critical requirement for effective consolidation is the ability to predict the impact on application performance of interference from on-chip resources (e.g., CPU and last-level cache (LLC)/memory bandwidth sharing) as well as from storage devices and network bandwidth contention. In this work, we propose an interference model that predicts the application QoS metric. Its key distinctive feature is the consideration of time-variant inter-dependency among different levels of resource interference. We use applications from a test suite and SPECweb2005 to illustrate the effectiveness of our model, achieving an average prediction error of less than 8%. Furthermore, we demonstrate how the proposed interference model can be used to optimize the cloud provider's metric (here, the number of successfully executed applications) to realize better workload placement decisions and thereby maintain the users' application QoS.
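For illustration only, a toy Python sketch of an interference-aware placement check: per-resource pressure from co-located workloads is mapped to a predicted QoS loss and compared against a budget. The weights, the linear form, and the function names are placeholder assumptions; the paper's model notably also captures time-variant inter-dependency between resource levels, which this sketch omits.

# Illustrative sketch: predict a workload's QoS degradation from the resource
# pressure of its co-located neighbours using a simple weighted-sum model.
SENSITIVITY = {"cpu": 0.30, "llc_mem_bw": 0.40, "disk": 0.15, "network": 0.15}

def predicted_qos_degradation(neighbour_pressure):
    """neighbour_pressure: dict resource -> utilization added by co-located VMs (0..1).
    Returns an estimated fractional QoS loss in [0, 1]."""
    loss = sum(SENSITIVITY[r] * neighbour_pressure.get(r, 0.0) for r in SENSITIVITY)
    return min(loss, 1.0)

def best_placement(candidate_hosts, qos_budget=0.10):
    """candidate_hosts: dict host -> pressure dict; pick a host within the QoS budget."""
    within = {h: predicted_qos_degradation(p) for h, p in candidate_hosts.items()}
    feasible = {h: d for h, d in within.items() if d <= qos_budget}
    chosen = feasible or within          # fall back to least-bad host if none fit
    return min(chosen, key=chosen.get)

if __name__ == "__main__":
    hosts = {"host1": {"cpu": 0.2, "llc_mem_bw": 0.1},
             "host2": {"cpu": 0.6, "llc_mem_bw": 0.5, "disk": 0.3}}
    print(best_placement(hosts))   # host1: predicted degradation within budget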
{"title":"A Performance Interference Model for Managing Consolidated Workloads in QoS-Aware Clouds","authors":"Qian Zhu, Teresa Tung","doi":"10.1109/CLOUD.2012.25","DOIUrl":"https://doi.org/10.1109/CLOUD.2012.25","url":null,"abstract":"Cloud computing offers users the ability to access large pools of computational and storage resources on-demand without the burden of managing and maintaining their own IT assets. Today's cloud providers charge users based upon the amount of resources used or reserved, with only minimal guarantees of the quality-of-service (QoS) experienced byte users applications. As virtualization technologies proliferate among cloud providers, consolidating multiple user applications onto multi-core servers increases revenue and improves resource utilization. However, consolidation introduces performance interference between co-located workloads, which significantly impacts application QoS. A critical requirement for effective consolidation is to be able to predict the impact of application performance in the presence of interference from on-chip resources, e.g., CPU and last-level cache (LLC)/memory bandwidth sharing, to storage devices and network bandwidth contention. In this work, we propose an interference model which predicts the application QoS metric. The key distinctive feature is the consideration of time-variant inter-dependency among different levels of resource interference. We use applications from a test suite and SPECWeb2005 to illustrate the effectiveness of our model and an average prediction error of less than 8% is achieved. Furthermore, we demonstrate using the proposed interference model to optimize the cloud provider's metric (here the number of successfully executed applications) to realize better workload placement decisions and thereby maintaining the user's application QoS.","PeriodicalId":214084,"journal":{"name":"2012 IEEE Fifth International Conference on Cloud Computing","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128399301","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 64
A Tiered Strategy for Auditing in the Cloud
Pub Date : 2012-06-24 DOI: 10.1109/CLOUD.2012.144
Rui Xie, R. Gamble
In this paper, we outline a tiered approach to auditing information in the cloud. The approach provides perspectives on auditable events that may include compositions of independently formed audit trails. Filtering and reasoning over the audit trails can manifest potential security vulnerabilities and performance attributes as desired by stakeholders.
{"title":"A Tiered Strategy for Auditing in the Cloud","authors":"Rui Xie, R. Gamble","doi":"10.1109/CLOUD.2012.144","DOIUrl":"https://doi.org/10.1109/CLOUD.2012.144","url":null,"abstract":"In this paper, we outline a tiered approach to auditing information in the cloud. The approach provides perspectives on auditable events that may include compositions of independently formed audit trails. Filtering and reasoning over the audit trails can manifest potential security vulnerabilities and performance attributes as desired by stakeholders.","PeriodicalId":214084,"journal":{"name":"2012 IEEE Fifth International Conference on Cloud Computing","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134059205","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 20
Defining and Implementing Connection Anonymity for SaaS Web Services
Pub Date : 2012-06-24 DOI: 10.1109/CLOUD.2012.88
Vinícius M. Pacheco, R. Puttini
In this paper, we define practical schemes to protect the cloud consumer's identity (ID) during message exchanges (connection anonymity) in SaaS. We describe the typical/target scenario for service consumption and provide a detailed privacy assessment. This is used to identify different levels of interactions between consumers and providers, as well as to evaluate how privacy is affected. We propose a multi-layered anonymity framework, where different anonymity techniques are employed together to protect ID, location, behavior and data privacy, during each level of consumer-provider interaction. We also define two schemes for generating and managing anonymous credentials, which are used to implement the proposed framework. These schemes provide two options of connection anonymity: traceable (anonymity can be disclosed, if required) and untraceable (anonymity cannot be disclosed). The consumer and provider will be able to choose which is more suitable to their needs and regulatory environments.
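To make the traceable/untraceable distinction concrete, here is a deliberately simplified Python sketch: a traceable credential is a keyed pseudonym the provider can re-link to the consumer, while an untraceable one is a random token with no stored mapping. Real anonymous-credential schemes rely on cryptographic constructions (e.g., blind or group signatures) that this toy example does not implement; all function names here are hypothetical.

# Illustrative sketch of the traceable/untraceable distinction only.
import hmac, hashlib, secrets

def traceable_credential(consumer_id: str, provider_key: bytes) -> str:
    # The provider (holding provider_key) can recompute this pseudonym from the
    # consumer ID and thus disclose the identity if legally required.
    return hmac.new(provider_key, consumer_id.encode(), hashlib.sha256).hexdigest()

def untraceable_credential() -> str:
    # Random token with no stored mapping: the identity cannot be disclosed.
    return secrets.token_hex(32)

if __name__ == "__main__":
    key = secrets.token_bytes(32)
    print("traceable:  ", traceable_credential("alice@example.com", key))
    print("untraceable:", untraceable_credential())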
{"title":"Defining and Implementing Connection Anonymity for SaaS Web Services","authors":"Vinícius M. Pacheco, R. Puttini","doi":"10.1109/CLOUD.2012.88","DOIUrl":"https://doi.org/10.1109/CLOUD.2012.88","url":null,"abstract":"In this paper, we define practical schemes to protect the cloud consumer's identity (ID) during message exchanges (connection anonymity) in SaaS. We describe the typical/target scenario for service consumption and provide a detailed privacy assessment. This is used to identify different levels of interactions between consumers and providers, as well as to evaluate how privacy is affected. We propose a multi-layered anonymity framework, where different anonymity techniques are employed together to protect ID, location, behavior and data privacy, during each level of consumer-provider interaction. We also define two schemes for generating and managing anonymous credentials, which are used to implement the proposed framework. These schemes provide two options of connection anonymity: traceable (anonymity can be disclosed, if required) and untraceable (anonymity cannot be disclosed). The consumer and provider will be able to choose which is more suitable to their needs and regulatory environments.","PeriodicalId":214084,"journal":{"name":"2012 IEEE Fifth International Conference on Cloud Computing","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132084287","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3