
Latest publications: 2010 IEEE 3rd International Conference on Cloud Computing

FlexPRICE: Flexible Provisioning of Resources in a Cloud Environment
Pub Date : 2010-07-05 DOI: 10.1109/CLOUD.2010.71
T. Henzinger, Anmol V. Singh, Vasu Singh, Thomas Wies, D. Zufferey
Cloud computing aims to give users virtually unlimited pay-per-use computing resources without the burden of managing the underlying infrastructure. We claim that, in order to realize the full potential of cloud computing, the user must be presented with a pricing model that offers flexibility at the requirements level, such as a choice between different degrees of execution speed, and the cloud provider must be presented with a programming model that offers flexibility at the execution level, such as a choice between different scheduling policies. In such a flexible framework, with each job, the user purchases a virtual computer with the desired speed and cost characteristics, and the cloud provider can optimize the utilization of resources across a stream of jobs from different users. We designed a flexible framework to test our hypothesis, which is called FlexPRICE (Flexible Provisioning of Resources in a Cloud Environment) and works as follows. A user presents a job to the cloud. The cloud finds different schedules to execute the job and presents a set of quotes to the user in terms of price and duration for the execution. The user then chooses a particular quote and the cloud is obliged to execute the job according to the chosen quote. FlexPRICE thus hides the complexity of the actual scheduling decisions from the user, but still provides enough flexibility to meet the user's actual demands. We implemented FlexPRICE in a simulator called PRICES that allows us to experiment with our framework. We observe that FlexPRICE provides a wide range of execution options --from fast and expensive to slow and cheap-- for the whole spectrum of data-intensive and computation-intensive jobs. We also observe that the set of quotes computed by FlexPRICE does not vary as the number of simultaneous jobs increases.
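A minimal sketch of the quote-based interaction described above (not the authors' implementation): hypothetical candidate schedules are turned into (duration, price) quotes, dominated quotes are filtered out, and the user picks the cheapest quote that meets a deadline and budget. All schedule names and numbers are invented.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Quote:
    schedule_id: str
    duration_h: float   # promised completion time in hours
    price_usd: float    # total price for the job

def pareto_quotes(candidates):
    """Keep only non-dominated quotes: no other quote is both faster and cheaper."""
    quotes = sorted(candidates, key=lambda q: (q.duration_h, q.price_usd))
    frontier, best_price = [], float("inf")
    for q in quotes:
        if q.price_usd < best_price:   # strictly cheaper than every faster quote kept so far
            frontier.append(q)
            best_price = q.price_usd
    return frontier

def choose(frontier, max_duration_h, budget_usd):
    """User-side selection: cheapest quote that meets the deadline and budget."""
    feasible = [q for q in frontier
                if q.duration_h <= max_duration_h and q.price_usd <= budget_usd]
    return min(feasible, key=lambda q: q.price_usd) if feasible else None

if __name__ == "__main__":
    candidates = [
        Quote("32-nodes", duration_h=1.0, price_usd=40.0),
        Quote("8-nodes", duration_h=3.5, price_usd=14.0),
        Quote("2-nodes", duration_h=12.0, price_usd=6.0),
        Quote("slow-bad", duration_h=13.0, price_usd=9.0),   # dominated, dropped
    ]
    frontier = pareto_quotes(candidates)
    print([q.schedule_id for q in frontier])                  # ['32-nodes', '8-nodes', '2-nodes']
    print(choose(frontier, max_duration_h=4.0, budget_usd=20.0))  # the '8-nodes' quote
```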
Citations: 58
Performance Measurements and Analysis of Network I/O Applications in Virtualized Cloud
Pub Date : 2010-07-05 DOI: 10.1109/CLOUD.2010.74
Yiduo Mei, Ling Liu, Xing Pu, Sankaran Sivathanu
Virtualization is a key technology for cloud-based data centers to implement the vision of infrastructure as a service (IaaS) and to promote effective server consolidation and application consolidation. However, current implementations of virtual machine monitors do not provide sufficient performance isolation to guarantee the effectiveness of resource sharing, especially when the applications running on multiple virtual machines of the same physical machine are competing for computing and communication resources. In this paper, we present our performance measurement study of network I/O applications in a virtualized cloud. We focus our measurement-based analysis on the performance impact of co-locating applications in a virtualized cloud in terms of throughput and resource sharing effectiveness, including the impact of idle instances on applications that are running concurrently on the same physical host. Our results show that by strategically co-locating network I/O applications, performance improvement for cloud consumers can be as high as 34%, and cloud providers can achieve over 40% performance gain.
Citations: 91
Metadata Partitioning for Large-Scale Distributed Storage Systems
Pub Date : 2010-07-05 DOI: 10.1109/CLOUD.2010.24
Jan-Jan Wu, Pangfeng Liu, Y. Chung
With the emergence of large-scale storage systems that separate metadata management from file read/write operations, and with requests targeting metadata accounting for over 80% of the total number of I/O requests, metadata management has become an interesting research problem on its own. When designing a metadata server cluster, the partitioning of the metadata among the servers is of critical importance for maintaining efficient metadata operations and balanced load distribution across the cluster. We propose a dynamic programming method combined with binary search to solve the partitioning problem. With theoretical analysis and extensive experiments, we show that our algorithm finds the partitioning that minimizes load imbalance among servers and maximizes the efficiency of metadata operations.
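The abstract does not spell out the algorithm, but the load-balancing flavor of the problem can be illustrated with a common binary-search formulation: find the smallest achievable maximum per-server load when the metadata namespace is split into contiguous ranges. This is only a sketch of the general idea under invented loads; the paper's actual method combines dynamic programming with binary search.

```python
def feasible(loads, k, cap):
    """Can loads (in namespace order) be split into at most k contiguous
    ranges whose sums all stay within cap?"""
    servers, current = 1, 0
    for w in loads:
        if w > cap:
            return False
        if current + w > cap:
            servers += 1           # start a new server's range
            current = w
            if servers > k:
                return False
        else:
            current += w
    return True

def min_max_load(loads, k):
    """Binary-search the smallest maximum per-server load."""
    lo, hi = max(loads), sum(loads)
    while lo < hi:
        mid = (lo + hi) // 2
        if feasible(loads, k, mid):
            hi = mid
        else:
            lo = mid + 1
    return lo

if __name__ == "__main__":
    # invented access load of each metadata subtree, in namespace order
    loads = [40, 10, 25, 5, 60, 15, 30]
    print(min_max_load(loads, k=3))   # 75, e.g. [40,10,25 | 5,60 | 15,30]
```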
Citations: 20
Autonomic Management of Cloud Service Centers with Availability Guarantees
Pub Date : 2010-07-05 DOI: 10.1109/CLOUD.2010.19
B. Addis, D. Ardagna, B. Panicucci, Li Zhang
Modern cloud infrastructures live in an open world, characterized by continuous changes in the environment and in the requirements they have to meet. Continuous changes occur autonomously and unpredictably, and they are out of the control of the cloud provider. Therefore, advanced solutions able to dynamically adapt the cloud infrastructure while providing continuous service and performance guarantees have to be developed. A number of autonomic computing solutions have been developed such that resources are dynamically allocated among running applications on the basis of short-term demand estimates. However, only performance and energy trade-offs have been considered so far, with less emphasis on infrastructure dependability/availability, which has been demonstrated to be the weakest link in the chain for early cloud providers. The aim of this paper is to fill this gap in the literature by devising resource allocation policies for virtualized cloud environments that are able to identify performance and energy trade-offs while providing a priori availability guarantees for cloud end-users.
Citations: 81
Characterizing Cloud Federation for Enhancing Providers' Profit
Pub Date : 2010-07-05 DOI: 10.1109/CLOUD.2010.32
Íñigo Goiri, Jordi Guitart, J. Torres
Cloud federation has been proposed as a new paradigm that allows providers to avoid the limitation of owning only a restricted amount of resources, which forces them to reject new customers when they do not have enough local resources to fulfill their customers' requirements. Federation allows a provider to dynamically outsource resources to other providers in response to demand variations. It also allows a provider that has underused resources to rent part of them to other providers. Both can increase the provider's profit when used adequately. This requires that the provider has a clear understanding of the potential of each federation decision, in order to choose the most convenient one depending on the environment conditions. In this paper, we present a complete characterization of providers' federation in the Cloud, including decision equations to outsource resources to other providers, rent free resources to other providers (i.e., insourcing), or shut down unused nodes to save power, and we characterize these decisions as a function of several parameters. Then, we demonstrate in the evaluation section how a provider can enhance its profit by using these equations to exploit federation, and how the different parameters influence which is the best decision in each situation.
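The decision equations themselves are in the paper; the sketch below only illustrates the three options named above (outsource, insource, shut down) with toy per-hour prices and a deliberately simplified profit comparison. All parameter names and values are invented.

```python
def federation_decision(demand_vms, capacity_vms,
                        revenue_per_vm, outsource_cost_per_vm,
                        insource_price_per_vm, node_power_cost, vms_per_node):
    """Toy per-epoch decision in the spirit of the three federation options:
    outsource excess demand, insource (rent out) idle nodes, or shut them down.
    All prices are per hour and purely illustrative."""
    actions = []
    excess = demand_vms - capacity_vms
    if excess > 0 and revenue_per_vm > outsource_cost_per_vm:
        # serving extra customers on a federated provider is still profitable
        actions.append(("outsource", excess))
    idle_nodes = max(0, capacity_vms - demand_vms) // vms_per_node
    if idle_nodes > 0:
        if insource_price_per_vm * vms_per_node > node_power_cost:
            # renting an idle node out earns more than it costs to keep it on
            actions.append(("insource", idle_nodes))
        else:
            actions.append(("shutdown", idle_nodes))
    return actions

if __name__ == "__main__":
    print(federation_decision(demand_vms=120, capacity_vms=100,
                              revenue_per_vm=0.12, outsource_cost_per_vm=0.09,
                              insource_price_per_vm=0.05, node_power_cost=0.30,
                              vms_per_node=8))
    # [('outsource', 20)]
    print(federation_decision(demand_vms=60, capacity_vms=100,
                              revenue_per_vm=0.12, outsource_cost_per_vm=0.09,
                              insource_price_per_vm=0.02, node_power_cost=0.30,
                              vms_per_node=8))
    # [('shutdown', 5)]
```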
Citations: 205
Adaptive Data Migration in Multi-tiered Storage Based Cloud Environment
Pub Date : 2010-07-05 DOI: 10.1109/CLOUD.2010.60
Gong Zhang, Lawrence Chiu, Ling Liu
Multi-tiered storage systems today are integrating Solid State Disks (SSD) on top of traditional rotational hard disks for performance enhancement, due to the significant IO improvements in SSD technology. It is widely recognized that automated data migration between SSD and HDD plays a critical role in the effective integration of SSD into multi-tiered storage systems. Furthermore, effective data migration has to take into account application-specific workload characteristics, deadlines, and IO profiles. An important and interesting challenge for automated data migration in multi-tiered storage systems is how to fully release the power of data migration while guaranteeing the migration deadline, which is critical to maximizing the performance of an SSD-enabled multi-tiered storage system. In this paper, we present an adaptive look-ahead data migration model that can incorporate application-specific characteristics and I/O profiles as well as workload deadlines. Our adaptive data migration model has three unique features. First, it incorporates into our formal model development a set of key factors that may impact the efficiency of look-ahead migration. Second, our data migration model can adaptively determine the optimal look-ahead window size, based on several parameters, to optimize the effectiveness of look-ahead migration. Third, we formally and experimentally show that the adaptive data migration model can improve overall system performance and resource utilization while meeting workload deadlines. Through our trace-driven experimental study, we compare the adaptive look-ahead migration approach with the basic migration model and show that the adaptive migration model is effective and efficient for continuously improving and tuning the performance and scalability of multi-tier storage systems.
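A minimal sketch of look-ahead window selection under a deliberately simplified cost model (fixed per-extent migration time and a fixed bandwidth share for background migration); the paper's model accounts for more factors, and all parameter names and numbers below are invented.

```python
def choose_lookahead_window(candidate_windows_h, deadline_h,
                            hot_extents_per_h, extent_migration_s,
                            migration_bandwidth_share):
    """Pick the largest look-ahead window whose projected migration work can
    finish before the workload deadline."""
    best = None
    for w in sorted(candidate_windows_h):
        extents = w * hot_extents_per_h          # extents promoted for this window
        migration_time_h = (extents * extent_migration_s
                            / migration_bandwidth_share / 3600.0)
        if migration_time_h <= deadline_h:
            best = w                              # feasible; keep looking for a larger window
    return best

if __name__ == "__main__":
    print(choose_lookahead_window(
        candidate_windows_h=[1, 2, 4, 8],
        deadline_h=3.0,
        hot_extents_per_h=500,        # hot extents identified per hour of look-ahead
        extent_migration_s=1.0,       # seconds of raw work to move one extent to SSD
        migration_bandwidth_share=0.25))
    # 4  (an 8-hour window would need ~4.4 h of migration, past the 3 h deadline)
```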
Citations: 65
IRain: A Personal Storage Cloud for Integrating Web Data Services
Pub Date : 2010-07-05 DOI: 10.1109/CLOUD.2010.43
Jiangning Cui, Taoying Liu, Qian Chen, Hong Liu
In this paper, we design and implement IRain, a prototype of a storage cloud for computer scientists and graduate students to manage personal data that is spread over the web. IRain (1) integrates personal data with various metainfo structures that come from different web sites and personal computers; (2) provides a global, unified environment to users and supports user-defined file views via flexible combinations of tags; (3) offers an easy way to integrate new web services via a VFS-like interface.
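A sketch of what tag-based, user-defined file views could look like; this is not IRain's actual interface, and the file paths and tags are invented.

```python
from collections import defaultdict

class TagIndex:
    """Minimal sketch of tag-based file views: files carry tags from several
    sources (web services, local folders), and a 'view' is the set of files
    matching a combination of tags."""
    def __init__(self):
        self._by_tag = defaultdict(set)

    def add(self, path, tags):
        for tag in tags:
            self._by_tag[tag].add(path)

    def view(self, *tags):
        """Files carrying all of the given tags (AND combination)."""
        sets = [self._by_tag[t] for t in tags]
        return set.intersection(*sets) if sets else set()

if __name__ == "__main__":
    idx = TagIndex()
    idx.add("cloud10/flexprice.pdf", {"paper", "cloud", "from:web"})
    idx.add("notes/llft.md", {"paper", "fault-tolerance", "from:laptop"})
    idx.add("photos/trip.jpg", {"photo", "from:web"})
    print(idx.view("paper", "from:web"))   # {'cloud10/flexprice.pdf'}
```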
Citations: 1
An Architecture for a Mashup Container in Virtualized Environments
Pub Date : 2010-07-05 DOI: 10.1109/CLOUD.2010.34
Michele Stecca, M. Maresca
This paper presents the architecture and the organization of a Mashup Container that supports the deployment and the execution of Event Driven Mashups (i.e., Composite Services in which the Services interact through events rather than through the classical Call-Response paradigm), following the Platform as a Service model in the Cloud Computing paradigm. We describe the two main modules of the container, namely the Deployment Module and the Service Execution Platform, and focus our attention on the performance of the latter. In particular we discuss the results of an evaluation test that we ran in a virtualized environment (VMware based) supporting scalability and fault tolerance.
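To make the event-driven interaction concrete, here is a generic publish/subscribe sketch; it is not the paper's Service Execution Platform, and the event names and services are invented. Services react to events and may emit further events instead of being invoked in a call-response fashion.

```python
from collections import defaultdict

class EventBus:
    """Tiny event bus: services in an event-driven mashup react to events
    rather than being called directly."""
    def __init__(self):
        self._handlers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self._handlers[event_type].append(handler)

    def publish(self, event_type, payload):
        for handler in self._handlers[event_type]:
            handler(payload)

if __name__ == "__main__":
    bus = EventBus()

    # a "location service" reacts to a position event by emitting an alert event
    def position_to_alert(payload):
        bus.publish("alert", {"text": f"user {payload['user']} entered {payload['area']}"})

    bus.subscribe("position", position_to_alert)
    # a "notification service" consumes the alert event
    bus.subscribe("alert", lambda p: print("ALERT:", p["text"]))

    bus.publish("position", {"user": "alice", "area": "campus"})
    # ALERT: user alice entered campus
```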
Citations: 11
A Runtime Model Based Monitoring Approach for Cloud
Pub Date : 2010-07-05 DOI: 10.1109/CLOUD.2010.31
Jin Shao, Hao Wei, Qianxiang Wang, Hong Mei
Monitoring plays a significant role in improving the quality of service in cloud computing. It helps clouds to scale resource utilization adaptively, to identify defects in services for service developers, and to discover usage patterns of numerous end users. However, due to the heterogeneity of components in clouds and the complexity arising from the wealth of runtime information, monitoring in clouds faces many new challenges. In this paper, we propose a runtime model for cloud monitoring (RMCM), which provides an intuitive representation of a running cloud by focusing on common monitoring concerns. Raw monitoring data gathered by multiple monitoring techniques are organized by RMCM to present a more intuitive profile of a running cloud. We applied RMCM in the implementation of a flexible monitoring framework, which can achieve a balance between runtime overhead and monitoring capability via adaptive management of monitoring facilities. Our experience of utilizing the monitoring framework on a real cloud demonstrates the feasibility and effectiveness of our approach.
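A minimal sketch of the general idea of organizing raw monitoring data around the entities of a running cloud rather than around the probes that produced it; the entity names, metrics, and aggregation below are invented and are not RMCM's actual model.

```python
from collections import defaultdict
from statistics import mean

class RuntimeModel:
    """Sketch: raw samples from heterogeneous probes are grouped per entity
    (host, VM, application) and exposed as a simple per-entity profile."""
    def __init__(self):
        self._metrics = defaultdict(lambda: defaultdict(list))

    def record(self, entity, metric, value):
        """Raw sample from any monitoring technique (agent, log parser, ...)."""
        self._metrics[entity][metric].append(value)

    def profile(self, entity):
        """Intuitive per-entity view: latest value and average of each metric."""
        return {m: {"last": vs[-1], "avg": round(mean(vs), 2)}
                for m, vs in self._metrics[entity].items()}

if __name__ == "__main__":
    model = RuntimeModel()
    model.record("vm-17", "cpu_util", 0.6)
    model.record("vm-17", "cpu_util", 0.8)
    model.record("app:webshop", "p99_latency_ms", 143)
    print(model.profile("vm-17"))
    # {'cpu_util': {'last': 0.8, 'avg': 0.7}}
```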
Citations: 132
Fault Tolerance Middleware for Cloud Computing
Pub Date : 2010-07-05 DOI: 10.1109/CLOUD.2010.26
Wenbing Zhao, P. Melliar-Smith, L. Moser
The Low Latency Fault Tolerance (LLFT) middleware provides fault tolerance for distributed applications deployed within a cloud computing or data center environment, using the leader/follower replication approach. The LLFT middleware consists of a Low Latency Messaging Protocol, a Leader-Determined Membership Protocol, and a Virtual Determinizer Framework. The Messaging Protocol provides a reliable, totally ordered message delivery service by employing a direct group-to-group multicast where the ordering is determined by the primary replica in the group. The Membership Protocol provides a fast reconfiguration and recovery service when a replica becomes faulty and when a replica joins or leaves a group. The Virtual Determinizer Framework captures ordering information at the primary replica and enforces the same ordering at the backup replicas for major sources of non-determinism. The LLFT middleware maintains strong replica consistency, offers application transparency, and achieves low end-to-end latency.
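The core ordering idea (the primary determines the delivery order and the backups follow it) can be sketched as below. This is not the LLFT protocol itself: there is no fault handling, membership, or real multicast, just an illustration of leader-determined total ordering with invented names.

```python
class PrimaryReplica:
    """The primary stamps every message with a sequence number, so all
    replicas process the same requests in the same order."""
    def __init__(self, backups):
        self._seq = 0
        self._backups = backups

    def multicast(self, msg):
        self._seq += 1
        for backup in self._backups:        # stand-in for group-to-group multicast
            backup.receive(self._seq, msg)

class BackupReplica:
    def __init__(self, name):
        self._name, self._next, self._pending = name, 1, {}

    def receive(self, seq, msg):
        self._pending[seq] = msg
        while self._next in self._pending:  # deliver in primary-determined order
            print(f"{self._name} delivers #{self._next}: {self._pending.pop(self._next)}")
            self._next += 1

if __name__ == "__main__":
    backups = [BackupReplica("backup-1"), BackupReplica("backup-2")]
    primary = PrimaryReplica(backups)
    primary.multicast("credit(acct=7, amount=50)")
    primary.multicast("debit(acct=7, amount=20)")
```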
Citations: 179