
Latest publications from the 2014 IEEE 7th International Conference on Cloud Computing

Mixed-Tenancy in the Wild - Applicability of Mixed-Tenancy for Real-World Enterprise SaaS-Applications
Pub Date : 2014-06-27 DOI: 10.1109/CLOUD.2014.119
S. Ruehl, Malte Rupprecht, Bjorn Morr, Matthias Reinhardt, S. Verclas
Software-as-a-Service (SaaS) is a delivery model whose basic idea is to provide applications to customers on demand over the Internet. SaaS thereby promotes multi-tenancy as a tool to exploit economies of scale: a single application instance serves multiple customers. However, a major drawback of SaaS is customers' hesitation to share infrastructure, application code, or data with other tenants. This is because one of the major threats of multi-tenancy is information disclosure due to a system malfunction, system error, or aggressive actions. So far, the only approach in research to counteract this hesitation has been to enhance the isolation between tenants using the same instance. Our approach (presented in earlier work) tackles the hesitation differently: it allows customers to choose whether, or even with whom, they want to share the application, enabling them to define constraints for individual application components and the underlying infrastructure. The contribution of this paper is an analysis of the real-world applicability of the mixed-tenancy approach. We conduct this analysis experimentally by applying mixed-tenancy to OpenERP, an open-source enterprise resource planning system used in industry. The conclusion drawn from this experiment is that the mixed-tenancy approach is technically realizable for real-world cases. However, there are scenarios in which the mixed-tenancy approach is not economically worthwhile for the operator.
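The constraint idea the abstract describes can be sketched as a simple placement check: each tenant declares, per application component, which other tenants it accepts as co-residents on the same instance. This is a minimal illustration, not the paper's actual model; the tenant names, the dict schema, and the check itself are invented for the example.

```python
def placement_allowed(component, tenants, constraints):
    """Return True if all the given tenants may share one instance
    of this component.

    constraints maps (tenant, component) -> set of acceptable
    co-tenants; a missing entry means "share with anyone".
    """
    for t in tenants:
        allowed = constraints.get((t, component))
        if allowed is None:
            continue  # this tenant declared no restriction
        others = set(tenants) - {t}
        if not others <= allowed:
            return False  # some co-tenant is not acceptable to t
    return True


# Illustrative constraints: acme shares its database component only
# with globex; initech wants a dedicated database instance.
constraints = {
    ("acme", "database"): {"globex"},
    ("initech", "database"): set(),
}

ok_shared = placement_allowed("database", ["acme", "globex"], constraints)
ok_dedicated = placement_allowed("database", ["initech"], constraints)
bad = placement_allowed("database", ["acme", "initech"], constraints)
```

Under this sketch, a deployment engine would only instantiate component groupings for which the check passes, falling back to dedicated instances (and higher operating cost) otherwise.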
Citations: 5
A Model Driven Framework for Secure Outsourcing of Computation to the Cloud
Pub Date : 2014-06-27 DOI: 10.1109/CLOUD.2014.145
M. Nassar, A. Erradi, Farida Sabry, Q. Malluhi
This paper presents a model-driven approach to define and then coordinate the execution of protocols for the secure outsourcing of computation over large datasets in cloud computing environments. First, we present our Outsourcing Protocol Definition Language (OPDL), used to define machine-processable protocols in an abstract and declarative way while leaving the implementation details to the underlying runtime components. The proposed language aims to simplify the design of these protocols while allowing their verification and the generation of cloud service compositions to coordinate protocol execution. We evaluated the expressiveness of OPDL by using it to define a set of representative secure outsourcing protocols from the literature.
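To make the declarative-versus-runtime split concrete, here is a toy stand-in for the OPDL idea: a protocol is written as data (roles and abstract actions, no implementations), and a coordinator derives an ordered execution plan from it. The dict schema, step names, and plan format are invented for illustration; the paper's actual language is not reproduced here.

```python
# A protocol as a declarative, machine-processable description:
# only roles and abstract actions, no implementation details.
protocol = {
    "name": "secure-computation-outsourcing",
    "steps": [
        {"role": "client", "action": "encrypt_input"},
        {"role": "cloud",  "action": "compute_on_encrypted"},
        {"role": "client", "action": "decrypt_and_verify"},
    ],
}


def execution_plan(proto):
    """Derive an ordered coordination plan from the abstract definition,
    leaving each action's implementation to the runtime components."""
    return [f'{i}:{step["role"]}.{step["action"]}'
            for i, step in enumerate(proto["steps"], start=1)]


plan = execution_plan(protocol)
```

A real coordinator would dispatch each plan entry to a concrete cloud service rather than just formatting strings, but the separation of concerns is the same: the definition stays abstract and verifiable, the execution is generated from it.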
Citations: 4
Evolving Big Data Stream Classification with MapReduce
Pub Date : 2014-06-27 DOI: 10.1109/CLOUD.2014.82
Ahsanul Haque, Brandon Parker, L. Khan, B. Thuraisingham
Big Data Stream mining has some inherent challenges that are not present in traditional data mining. Not only does a Big Data Stream deliver a large volume of data continuously, but it may also contain different types of features. Moreover, the concepts and features tend to evolve throughout the stream. Traditional data mining techniques are not sufficient to address these challenges. In our current work, we have designed a multi-tiered ensemble-based method, HSMiner, to address the aforementioned challenges and label instances in an evolving Big Data Stream. However, this method requires building a large number of AdaBoost ensembles for each of the numeric features after receiving each new data chunk, which is very costly. Thus, HSMiner may face scalability issues when classifying a Big Data Stream. To address this problem, we propose three approaches to building this large number of AdaBoost ensembles using MapReduce-based parallelism. We compare these approaches from different aspects of design, and we empirically show that they are very useful for our base method to achieve significant scalability and speedup.
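The parallelization pattern the abstract points at can be shown with a pure-Python map/reduce sketch: the map phase keys each record's values by feature so that each feature's data lands at one reducer, and each reducer would then train that feature's ensemble. Everything here is illustrative; in particular, the reducer only computes a summary statistic as a stand-in for actual AdaBoost training.

```python
from collections import defaultdict


def map_phase(chunk, feature_names):
    """Emit (feature, value) pairs so each feature's data is grouped
    at a single reducer."""
    for record in chunk:
        for f in feature_names:
            yield f, record[f]


def reduce_phase(pairs):
    """One 'model' per feature; a placeholder summary stands in for
    training an AdaBoost ensemble on that feature's values."""
    groups = defaultdict(list)
    for feature, value in pairs:
        groups[feature].append(value)
    return {f: {"n": len(vs), "mean": sum(vs) / len(vs)}
            for f, vs in groups.items()}


chunk = [{"x1": 1.0, "x2": 10.0}, {"x1": 3.0, "x2": 30.0}]
models = reduce_phase(map_phase(chunk, ["x1", "x2"]))
```

On an actual Hadoop cluster the two phases run on separate machines with the shuffle in between, which is what lets the per-feature ensemble builds proceed in parallel after each new data chunk.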
Citations: 11
Evaluating Dynamic Resource Allocation Strategies in Virtualized Data Centers
Pub Date : 2014-06-27 DOI: 10.1109/CLOUD.2014.52
A. Wolke, Lukas Ziegler
Virtualization technology allows a dynamic allocation of VMs to servers, which reduces server demand and increases the energy efficiency of data centers. Dynamic control strategies migrate VMs between servers in response to their actual workload, a concept that promises further improvements in VM allocation efficiency. In this paper we evaluate the applicability of DSAP in a deterministic environment. DSAP is a linear program that calculates VM allocations and live migrations for workload patterns known a priori. Efficiency is evaluated by simulations as well as on an experimental test-bed infrastructure, and the results are compared against alternative control approaches that we studied in preliminary work. Our findings are that dynamic allocation can reduce server demand at a reasonable service quality, but countermeasures are required to keep the number of live migrations under control.
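The core optimization DSAP solves per time slot is a packing problem: given demands known in advance, assign VMs to as few servers as possible without exceeding capacity. As a stand-in that needs no LP solver, the sketch below brute-forces the minimal server count for one slot; the demand and capacity numbers are made up, and a real formulation would also price the migrations between consecutive slots.

```python
from itertools import product


def min_servers(demands, capacity, max_servers):
    """Smallest server count for which some VM-to-server assignment
    keeps every server's load within capacity (exhaustive search,
    fine only for tiny instances)."""
    for n in range(1, max_servers + 1):
        for assignment in product(range(n), repeat=len(demands)):
            loads = [0.0] * n
            for vm, host in enumerate(assignment):
                loads[host] += demands[vm]
            if all(load <= capacity for load in loads):
                return n
    return None  # infeasible within max_servers


# Four VMs with normalized CPU demands, unit-capacity servers.
n = min_servers([0.6, 0.5, 0.4, 0.3], capacity=1.0, max_servers=4)
```

Chaining such per-slot solutions over a known workload trace, and charging for each VM that changes host between slots, recovers the trade-off the paper measures: fewer servers versus more live migrations.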
Citations: 12
FRESH: Fair and Efficient Slot Configuration and Scheduling for Hadoop Clusters
Pub Date : 2014-06-27 DOI: 10.1109/CLOUD.2014.106
Jiayin Wang, Yi Yao, Ying Mao, B. Sheng, N. Mi
Hadoop is an emerging framework for parallel big data processing. While becoming popular, Hadoop is too complex for regular users to fully understand all the system parameters and tune them appropriately. Especially when processing a batch of jobs, the default Hadoop settings may cause inefficient resource utilization and unnecessarily prolong the execution time. This paper considers an extremely important setting, the slot configuration, which by default is fixed and static. We propose an enhanced Hadoop system called FRESH that can derive the best slot setting, dynamically configure slots, and appropriately assign tasks to the available slots. The experimental results show that when serving a batch of MapReduce jobs, FRESH significantly improves the makespan as well as the fairness among jobs.
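To see why a static slot split hurts, consider the simplest possible dynamic rule: divide a node's fixed slot budget between map and reduce slots in proportion to the tasks currently pending. This is only a toy illustration of the problem FRESH addresses; the actual FRESH algorithm, and the numbers below, are not taken from the paper.

```python
def split_slots(total_slots, pending_maps, pending_reduces):
    """Proportionally split a node's slot budget between map and
    reduce slots, keeping at least one slot of each kind."""
    pending = pending_maps + pending_reduces
    if pending == 0:
        half = total_slots // 2
        return half, total_slots - half
    maps = round(total_slots * pending_maps / pending)
    maps = min(max(maps, 1), total_slots - 1)
    return maps, total_slots - maps


# Map-heavy phase of a batch: 30 pending map tasks, 10 reduce tasks.
m, r = split_slots(8, pending_maps=30, pending_reduces=10)
```

A static 4/4 split would leave map tasks queued while reduce slots sit idle in this phase, then waste map slots once the job flips to being reduce-heavy, which is exactly the inefficiency that motivates dynamic slot configuration.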
Citations: 46
Fast Live Migration with Small IO Performance Penalty by Exploiting SAN in Parallel
Pub Date : 2014-06-27 DOI: 10.1109/CLOUD.2014.16
Soramichi Akiyama, Takahiro Hirofuchi, Ryousei Takano, S. Honiden
Virtualization techniques greatly benefit cloud computing. Live migration enables a datacenter to dynamically relocate virtual machines (VMs) without disrupting the services running on them. Efficient live migration is the key to improving the energy efficiency and resource utilization of a datacenter through dynamic placement of VMs. Recent studies have achieved efficient live migration by deleting the page cache of the guest OS to shrink its memory footprint before a migration. However, these studies do not solve the problem of the IO performance penalty after a migration caused by the loss of the page cache. We propose an advanced memory transfer mechanism for live migration that skips transferring the page cache to shorten total migration time, while restoring it transparently to the guest OS via the SAN to prevent the IO performance penalty. To start a migration, our mechanism collects the mapping information between the page cache and disk blocks. During a migration, the source host skips transferring the page cache but transfers the other memory content, while the destination host reads the same data as the page cache from the disk blocks via the SAN. Experiments with web server and database workloads showed that our mechanism reduces total migration time with a significantly small IO performance penalty.
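The transfer-planning step described above can be sketched in a few lines: given the collected page-to-disk-block mapping, pages backed by the page cache are excluded from the network transfer and instead scheduled to be re-read from their blocks on the destination. Page identifiers and block numbers below are invented for illustration.

```python
def plan_transfer(pages, cache_map):
    """Split a VM's memory pages into those sent over the migration
    network and those restored from disk blocks via the SAN.

    cache_map: page -> disk block, for pages holding page-cache data.
    """
    send = [p for p in pages if p not in cache_map]
    restore = [(p, cache_map[p]) for p in pages if p in cache_map]
    return send, restore


pages = ["p0", "p1", "p2", "p3"]
cache_map = {"p1": 17, "p3": 42}  # p1 and p3 mirror disk blocks 17 and 42
send, restore = plan_transfer(pages, cache_map)
```

Because the two lists are disjoint, the network transfer and the SAN reads can proceed in parallel, which is where the "exploiting SAN in parallel" of the title comes from.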
Citations: 4
Virtual Machine Placement in Predictable Computing Clouds
Pub Date : 2014-06-27 DOI: 10.1109/CLOUD.2014.148
R. Rauscher, R. Acharya
Literature about cloud computing often makes the assumption that the resource demands of computing clouds (and the virtual machines that constitute them) are unpredictable in the short term. There are, however, specific use cases where resource demands can be anticipated. This paper discusses dissertation work in progress which shows that, in certain predictable environments, preemptive virtual machine migration can improve both computational resource utilization and the overall user experience. A novel algorithm is presented that reacts to anticipated future resource demands based on the past behavior of virtual machines. Simulations are used to quantify the performance improvements.
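As a minimal stand-in for the predictive idea (the paper's actual algorithm is not described in this abstract), one can forecast a VM's next demand from its recent history and trigger a preemptive migration when the forecast would overload the current host. The moving-average forecaster, window size, and capacity threshold are all assumptions of this sketch.

```python
def forecast(history, window=3):
    """Predict the next demand as the mean of the recent history."""
    recent = history[-window:]
    return sum(recent) / len(recent)


def should_migrate(vm_history, host_load, capacity=1.0):
    """Preemptively migrate if the host would exceed capacity under
    the VM's forecast demand."""
    return host_load + forecast(vm_history) > capacity


hot = should_migrate([0.2, 0.5, 0.8], host_load=0.6)   # rising demand
cold = should_migrate([0.1, 0.1, 0.1], host_load=0.6)  # flat, light demand
```

The point of acting on the forecast rather than the current load is that the migration completes before the overload occurs, instead of reacting after users already feel the contention.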
Citations: 3
IO Performance Interference among Consolidated n-Tier Applications: Sharing Is Better Than Isolation for Disks
Pub Date : 2014-06-27 DOI: 10.1109/CLOUD.2014.14
Chien-An Lai, Qingyang Wang, Joshua Kimball, Jack Li, Junhee Park, C. Pu
The performance unpredictability associated with migrating applications into cloud computing infrastructures has impeded this migration. For example, CPU contention between co-located applications has been shown to exhibit counter-intuitive behavior. In this paper, we investigate IO performance interference through an experimental study of consolidated n-tier applications sharing the same disk. Surprisingly, we found that specifying a fixed disk allocation, e.g., limiting the number of Input/Output Operations Per Second (IOPS) per VM, results in significantly lower performance than fully sharing the disk across VMs. Moreover, we observe that severe performance interference among VMs cannot be totally eliminated even with a sharing strategy (e.g., response times for constant workloads still increase by over 1,100%). Using a micro-benchmark (Filebench) and an n-tier application benchmark system (RUBBoS), we demonstrate the existence of disk contention in consolidated environments, and show how performance loss occurs when co-located database systems flush their logs from memory to disk in order to maintain database consistency. Potential solutions to these isolation issues are (1) increasing the log buffer size to amortize the disk IO cost and (2) decreasing the number of write threads to alleviate disk contention. We validate these methods experimentally and find a 64% and 57% reduction in response time (or, more generally, a reduction in performance interference) for constant and increasing workloads respectively.
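The first remedy rests on simple arithmetic: at a given log write rate, the number of flushes per second falls in proportion to the buffer size, so each flush amortizes more logged bytes. The rates and buffer sizes below are illustrative, not the paper's measurements.

```python
def flushes_per_second(log_bytes_per_s, buffer_bytes):
    """How often a full log buffer must be flushed to disk,
    assuming the buffer is flushed only when full."""
    return log_bytes_per_s / buffer_bytes


# A database logging 8 MiB/s of redo data:
small = flushes_per_second(8 * 2**20, 1 * 2**20)  # 1 MiB buffer
large = flushes_per_second(8 * 2**20, 8 * 2**20)  # 8 MiB buffer
```

Going from the 1 MiB to the 8 MiB buffer cuts the flush rate from 8/s to 1/s, and with several consolidated databases on one disk it is these overlapping flush bursts that drive the contention, which is why fewer, larger flushes (and fewer write threads) reduce the interference.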
Citations: 16
Ensuring High-Performance of Mission-Critical Java Applications in Multi-tenant Cloud Platforms
Pub Date : 2014-06-27 DOI: 10.1109/CLOUD.2014.88
Zhenyun Zhuang, C. Tran, H. Ramachandra, B. Sridharan
Cloud computing promises a cost-effective and administration-effective solution to the traditional needs of computing resources. While the shared hardware and software bring efficiency to users, the multi-tenancy characteristics also bring unique challenges to the backend cloud platforms. In particular, the JVM mechanisms used by Java applications, coupled with OS-level features, give rise to a set of problems that are not present in other deployment scenarios. In this work, we consider the problem of ensuring high performance of mission-critical Java applications in multi-tenant cloud environments. Based on our experience with LinkedIn's platforms, we identify and solve a set of problems caused by multi-tenancy, and we share the lessons and knowledge we learned in the process.
Citations: 3
Automated Selection and Configuration of Cloud Environments Using Software Product Lines Principles
Pub Date : 2014-06-27 DOI: 10.1109/CLOUD.2014.29
Clément Quinton, Daniel Romero, L. Duchien
Deploying an application to a cloud environment has recently become very popular, since it offers many advantages such as improved reliability or scalability. These cloud environments provide a wide range of resources at different levels of functionality, which must be appropriately configured by stakeholders for the application to run properly. Handling this variability during the configuration and deployment stages is a complex and error-prone process, usually handled in an ad hoc manner by existing solutions. In this paper, we propose a software-product-lines-based approach to address these issues. Combined with a domain model used to select a suitable cloud environment, our approach supports stakeholders in configuring the selected cloud environment in a consistent way, and it automates the deployment of such configurations through the generation of executable deployment scripts. To evaluate the soundness of the proposed approach, we conducted an experiment involving 10 participants with different levels of experience in cloud configuration and deployment. The experiment shows that using our approach significantly reduces configuration time and, most importantly, provides a reliable way to find a correct and suitable cloud configuration. Moreover, our empirical evaluation shows that our approach is effective and scalable enough to properly deal with a significant number of cloud environments.
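The software-product-line principle at work here can be shown with a toy feature-model check: a cloud environment's options form a set of features with cross-tree constraints, and a configuration is accepted only if it satisfies them. The features, "requires"/"excludes" rules, and set-based encoding below are invented for illustration and are not the paper's actual model.

```python
def valid(config, requires, excludes):
    """Check a feature selection against requires/excludes constraints.

    config:   set of selected features
    requires: feature -> set of features it depends on
    excludes: feature -> set of features incompatible with it
    """
    for feat in config:
        if not requires.get(feat, set()) <= config:
            return False  # a required feature is missing
        if excludes.get(feat, set()) & config:
            return False  # two incompatible features co-selected
    return True


requires = {"auto_scaling": {"load_balancer"}}
excludes = {"mysql": {"postgresql"}}

good = valid({"load_balancer", "auto_scaling", "mysql"}, requires, excludes)
bad = valid({"mysql", "postgresql"}, requires, excludes)
```

Rejecting invalid selections up front, before any deployment script is generated, is what replaces the error-prone ad hoc configuration process the abstract criticizes.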
{"title":"Automated Selection and Configuration of Cloud Environments Using Software Product Lines Principles","authors":"Clément Quinton, Daniel Romero, L. Duchien","doi":"10.1109/CLOUD.2014.29","DOIUrl":"https://doi.org/10.1109/CLOUD.2014.29","url":null,"abstract":"Deploying an application to a cloud environment has recently become very trendy, since it offers many advantages such as improving reliability or scalability. These cloud environments provide a wide range of resources at different levels of functionality, which must be appropriately configured by stakeholders for the application to run properly. Handling this variability during the configuration and deployment stages is a complex and error-prone process, usually made in an ad hoc manner in existing solutions. In this paper, we propose a software product lines based approach to face these issues. Combined with a domain model used to select among cloud environments a suitable one, our approach supports stakeholders while configuring the selected cloud environment in a consistent way, and automates the deployment of such configurations through the generation of executable deployment scripts. To evaluate the soundness of the proposed approach, we conduct an experiment involving 10 participants with different levels of experience in cloud configuration and deployment. The experiment shows that using our approach significantly reduces time and most importantly, provides a reliable way to find a correct and suitable cloud configuration. Moreover, our empirical evaluation shows that our approach is effective and scalable to properly deal with a significant number of cloud environments.","PeriodicalId":288542,"journal":{"name":"2014 IEEE 7th International Conference on Cloud Computing","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121636019","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 47
Journal
2014 IEEE 7th International Conference on Cloud Computing