
2015 IEEE International Conference on Cloud Engineering: Latest Publications

Cloud Desktop Workload: A Characterization Study
Pub Date: 2015-03-09 DOI: 10.1109/IC2E.2015.25
E. Casalicchio, Stefano Iannucci, L. Silvestri
Today the cloud-desktop service, or Desktop-as-a-Service (DaaS), is rapidly replacing Virtual Desktop Infrastructure (VDI), as confirmed by the prominence of the players entering the DaaS market. In this paper we study the workload of a DaaS provider, analyzing three months of real traffic and resource usage. What emerges from the study, the first on the subject to the best of our knowledge, is that CPU and disk usage are long-tail distributed (lognormal, Weibull, and Pareto) and that the length of working sessions is exponentially distributed. These results are important for the selection of an appropriate performance model for capacity planning or run-time resource provisioning, the setup of workload generators, and the definition of heuristic resource-provisioning policies. The paper provides an accurate distribution fitting for all the workload features considered and discusses the implications of the results for performance analysis.
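The lognormal and exponential fits the paper reports have closed-form maximum-likelihood estimators. A minimal stdlib-only Python sketch on synthetic data (the parameters and sample sizes below are hypothetical stand-ins, not the paper's measurements):

```python
import math
import random
import statistics

def fit_lognormal(samples):
    """MLE for a lognormal: mean and std of the underlying normal of log(x)."""
    logs = [math.log(x) for x in samples]
    return statistics.fmean(logs), statistics.pstdev(logs)

def fit_exponential(samples):
    """MLE for an exponential: rate = 1 / sample mean."""
    return 1.0 / statistics.fmean(samples)

random.seed(42)
# Synthetic stand-ins for measured CPU usage and session lengths.
cpu_usage = [random.lognormvariate(1.0, 0.5) for _ in range(10_000)]
session_len = [random.expovariate(0.2) for _ in range(10_000)]

mu, sigma = fit_lognormal(cpu_usage)      # expect roughly (1.0, 0.5)
rate = fit_exponential(session_len)       # expect roughly 0.2
print(f"lognormal: mu={mu:.2f} sigma={sigma:.2f}, exponential rate={rate:.3f}")
```

A goodness-of-fit test (e.g. Kolmogorov-Smirnov) would then decide between the candidate long-tail families, as the paper does for each workload feature.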
Citations: 7
Towards a Formalised Representation for the Technical Enforcement of Privacy Level Agreements
Pub Date: 2015-03-09 DOI: 10.1109/IC2E.2015.72
Michela D'Errico, Siani Pearson
Privacy Level Agreements (PLAs) are likely to be increasingly adopted as a standardized way for cloud providers to describe their data protection practices. In this paper we propose an ontology-based model to represent the information disclosed in the agreement, turning it into a means that allows software tools to use and further process that information for different purposes, including automated service offering discovery and comparison. A specific usage of the PLA ontology is presented, showing how to link high-level policies to operational policies that are then enforced and monitored. Through this established link, cloud users gain greater assurance that what is expressed in such agreements is actually being met, and can thereby take this information into account when choosing cloud service providers. Furthermore, the created link can be used to enable policy enforcement tools to add semantics to the evidence they produce; this mainly takes the form of logs associated with the specific policy whose execution they document. Finally, the ontology model provides a means of enabling interoperability among the tools in charge of enforcing the agreement and monitoring possible violations of its terms.
Citations: 11
Towards Optimizing Wide-Area Streaming Analytics
Pub Date: 2015-03-09 DOI: 10.1109/IC2E.2015.53
Benjamin Heintz, A. Chandra, R. Sitaraman
Modern analytics services require the analysis of large quantities of data derived from disparate geo-distributed sources. Further, the analytics requirements can be complex, with many applications requiring a combination of both real-time and historical analysis, resulting in complex tradeoffs between cost, performance, and information quality. While the traditional approach to analytics processing is to send all the data to a dedicated centralized location, an alternative approach would be to push all computing to the edge for in-situ processing. We argue that neither approach is optimal for modern analytics requirements. Instead, we examine complex tradeoffs driven by a large number of factors such as application, data, and resource characteristics. We present an empirical study using PlanetLab experiments with beacon data from Akamai's download analytics service. We explore key tradeoffs and their implications for the design of next-generation scalable wide-area analytics.
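The centralize-vs-edge tradeoff can be made concrete with a back-of-envelope bandwidth comparison. All numbers below are hypothetical illustrations, not figures from the Akamai dataset:

```python
# Ship every raw beacon record to a central site, vs. aggregate at each
# edge and ship only a per-edge summary. Hypothetical sizes throughout.
edges = 50
records_per_edge = 1_000_000
record_bytes = 200
summary_bytes = 4_096          # one aggregated summary per edge

centralized = edges * records_per_edge * record_bytes   # raw transfer
edge_aggregated = edges * summary_bytes                 # summary transfer
print(f"centralized: {centralized / 1e9:.1f} GB, "
      f"edge-aggregated: {edge_aggregated / 1e3:.1f} KB")
```

Aggregation cuts transfer volume by orders of magnitude but discards record-level detail, which is exactly the information-quality side of the tradeoff the paper studies.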
Citations: 6
Multi-cloud Distribution of Virtual Functions and Dynamic Service Deployment: Open ADN Perspective
Pub Date: 2015-03-09 DOI: 10.1109/IC2E.2015.49
D. Bhamare, R. Jain, M. Samaka, Gabor Vaszkun, A. Erbad
Network Function Virtualization (NFV) and Service Chaining (SC) are novel service deployment approaches that give Application Service Providers and Network Providers increased flexibility and cost efficiency in contemporary cloud environments. However, NFV and SC are still new and evolving topics. Optimized placement of these virtual functions is necessary to keep latency acceptable for end-users. In this work we consider the problem of optimal Virtual Function (VF) placement in a multi-cloud environment to satisfy client demands so that the total response time is minimized. In addition, we consider the problem of dynamic service deployment for OpenADN, a novel multi-cloud application delivery platform.
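As a rough illustration of the placement problem (not the paper's formulation or algorithm), a greedy latency-first assignment under capacity limits might look like:

```python
# Assign each client's VF demand to the cloud with the lowest latency
# that still has capacity. All clients, clouds, and numbers are hypothetical.
latency = {            # client -> {cloud: round-trip latency in ms}
    "c1": {"east": 20, "west": 80},
    "c2": {"east": 90, "west": 30},
    "c3": {"east": 40, "west": 45},
}
capacity = {"east": 2, "west": 1}   # VF instances each cloud can host

placement = {}
for client, lats in latency.items():
    for cloud in sorted(lats, key=lats.get):   # try closest cloud first
        if capacity[cloud] > 0:
            placement[client] = cloud
            capacity[cloud] -= 1
            break

total = sum(latency[c][placement[c]] for c in placement)
print(placement, "total latency:", total)   # -> 20 + 30 + 40 = 90 ms
```

A greedy heuristic like this gives a feasible baseline; an optimal formulation would typically be cast as an integer program over the same latency and capacity inputs.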
Citations: 33
An Automated Parallel Approach for Rapid Deployment of Composite Application Servers
Pub Date: 2015-03-09 DOI: 10.1109/IC2E.2015.16
Yasuharu Katsuno, Hitomi Takahashi
Infrastructure as a Service (IaaS) generally provides a standard vanilla server that contains an OS and basic functions, and each user has to manually install the applications required for a proper server deployment. We are working on a composite application deployment approach that automatically installs selected applications in a flexible manner, based on a set of application installation scripts invoked on the vanilla server. Some applications have installation dependencies involving multiple servers. Previous research projects on installing applications with multi-server dependencies have deployed the servers sequentially, which means the total deployment time grows linearly with the number of servers. Our automated parallel approach makes the composite application deployment run in parallel even when there are installation dependencies across multiple servers. We implemented a prototype system on Chef, a widely used automatic server installation framework, and evaluated the performance of our composite application deployment on the SoftLayer public cloud using two composite application server cases. The deployment times were reduced by roughly 40% in our trials.
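The gain from parallelism comes from installing all servers with satisfied dependencies in the same wave. A sketch using Python's stdlib `graphlib` to derive parallel waves from a hypothetical dependency graph (this is an illustration of the idea, not the authors' Chef-based implementation):

```python
from graphlib import TopologicalSorter

# Hypothetical dependency graph: each server's install script can start
# only after the servers it depends on are ready.
deps = {
    "app1": {"db"},
    "app2": {"db"},
    "lb":   {"app1", "app2"},
    "db":   set(),
}

ts = TopologicalSorter(deps)
ts.prepare()
waves = []
while ts.is_active():
    ready = sorted(ts.get_ready())   # everything here can install in parallel
    waves.append(ready)
    ts.done(*ready)

print(waves)   # 3 parallel waves instead of 4 sequential installs
```

Here a sequential deployment takes four install steps, while the wave schedule finishes in three, because `app1` and `app2` install concurrently; with wider fan-out the saving grows, consistent with the roughly 40% reduction the paper reports.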
Citations: 7
Multi-agent Based Intelligence Generation from Very Large Datasets
Pub Date: 2015-03-09 DOI: 10.1109/IC2E.2015.96
Karima Qayumi
The spread of computing clouds and blockchain technology enables a trend towards highly decentralized and distributed management of potentially very large datasets. Existing big data-mining systems are not designed for this next-level scale of distribution and decentralization. Additionally, they currently do not provide the system scalability required for very large datasets. We aim at solving these problems with a scalable, distributed multi-agent-system-based approach.
Citations: 13
An Introduction to Cloud Benchmarking
Pub Date: 2015-03-09 DOI: 10.1109/IC2E.2015.65
David Bermbach
Over the last few years, more and more cloud computing offerings have emerged, ranging from compute, data storage, and middleware services over platform environments up to ready-to-use applications. Choosing the best offering for a particular use case is a complex task which involves comparison and trade-off analysis of functional and non-functional service properties; for non-functional quality of service (QoS) properties, this is typically done via benchmarking. Today, a plethora of benchmarking solutions exists for the different layers in the cloud stack (IaaS, PaaS, SaaS), each typically addressing a single QoS dimension; a holistic cloud benchmark, even for a single layer in the cloud stack, is still missing. In this tutorial, we give an overview of existing cloud benchmarking solutions and point out ways in which these different benchmarks could be used in concert to compare clouds as a whole (for instance, the Amazon cloud vs. the Google cloud) instead of analyzing isolated QoS dimensions of single cloud services.
Citations: 4
Leveraging Linux Containers to Achieve High Availability for Cloud Services
Pub Date: 2015-03-09 DOI: 10.1109/IC2E.2015.17
Wubin Li, A. Kanso, Abdelouahed Gherbi
In this work, we present a novel approach that leverages Linux containers to achieve High Availability (HA) for cloud applications. A middleware comprising a set of HA agents is defined to compensate for the limitations of Linux containers in achieving HA. In our approach we start modeling at the application level, considering the dependencies among application components. We generate a proper scheduling scheme and then deploy the application across containers in the cloud. For each container that hosts critical components, we continuously monitor its status and checkpoint its full state, and then react to its failure by restarting it locally or failing over to another host, where we resume computing from the most recent state. With this strategy, all components hosted in a container are preserved without intrusive modifications on the application side. Finally, the feasibility of our approach is verified by building a proof-of-concept prototype and a case study of a video streaming application.
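The restart-then-failover reaction described above can be sketched as a small decision function. The host names and the retry threshold below are hypothetical; this is not the middleware's actual API:

```python
# Toy decision logic for an HA agent reacting to a container failure:
# restart on the same host first; after repeated local failures, fail
# over to the next host, where the latest checkpoint is resumed.
HOSTS = ["host-a", "host-b", "host-c"]

def react(host: str, local_failures: int, max_local: int = 2):
    """Return the recovery action for a container that just failed on `host`."""
    if local_failures < max_local:
        return ("restart", host)                  # cheap local recovery
    nxt = HOSTS[(HOSTS.index(host) + 1) % len(HOSTS)]
    return ("failover", nxt)                      # resume from checkpoint elsewhere

print(react("host-a", 0))   # first failure: restart locally
print(react("host-a", 2))   # repeated failures: fail over to host-b
```

In the paper's design, the real agents additionally checkpoint container state continuously, so the failover branch resumes from the most recent checkpoint rather than from a cold start.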
Citations: 42
Stratus ML: A Layered Cloud Modeling Framework
Pub Date: 2015-03-09 DOI: 10.1109/IC2E.2015.42
Mohammad Hamdaqa, L. Tahvildari
The main quest for cloud stakeholders is to find an optimal deployment architecture for cloud applications that maximizes availability, minimizes cost, and addresses portability and scalability. Unfortunately, the lack of a unified definition and of adequate modeling languages and methodologies that address cloud domain-specific characteristics makes architecting efficient cloud applications a daunting task. This paper introduces Stratus ML: a technology-agnostic integrated modeling framework for cloud applications. Stratus ML provides an intuitive user interface that allows cloud stakeholders (i.e., providers, developers, administrators, and financial decision makers) to define their application services, configure them, specify the applications' behaviour at runtime through a set of adaptation rules, and estimate cost under diverse cloud platforms and configurations. Moreover, through a set of model transformation templates, Stratus ML maintains consistency between the various artifacts of cloud applications. This paper presents Stratus ML and illustrates its usefulness and practical applicability from different stakeholder perspectives. A demo video, usage scenario, and other relevant information can be found on the Stratus ML webpage.
Citations: 21
In-memory computing for scalable data analytics
Pub Date: 2015-03-09 DOI: 10.1109/IC2E.2015.59
Jun Yu Li
Current data analytics software stacks are tailored to use large numbers of commodity machines in clusters, with each machine containing a small amount of memory. Thus, significant effort is made in these stacks to partition the data into small chunks and process these chunks in parallel. Recent advances in memory technology now promise the availability of machines with the amount of memory increased by two or more orders of magnitude. For example, The Machine [1], currently under development at HP Labs, plans to use memristors, a new type of non-volatile random access memory with much larger memory density at access speeds comparable to today's dynamic random access memory. Such technologies offer the possibility of a flat memory/storage hierarchy, in-memory data processing, and instant persistence of intermediate and final processing results. Photonic fabrics provide large communication bandwidth to move large volumes of data between processing units at very low latency. Moreover, multicore architectures adopt system-on-chip (SoC) designs to achieve significant compute performance with high power efficiency.
Citations: 2