
Latest publications from the 2015 IEEE International Conference on Cloud Engineering

Information Flow Control for Strong Protection with Flexible Sharing in PaaS
Pub Date : 2015-03-09 DOI: 10.1109/IC2E.2015.64
Thomas Pasquier, Jatinder Singh, J. Bacon
The need to share data across applications is becoming increasingly evident. Current cloud isolation mechanisms focus solely on protection, such as containers that isolate at the OS level and virtual machines that isolate through the hypervisor. However, by focusing rigidly on protection, these approaches do not provide for controlled sharing. This paper presents how Information Flow Control (IFC) offers a flexible alternative. As a data-centric mechanism, it enables strong isolation when required, while providing continuous, fine-grained control of the data being shared. An IFC-enabled cloud platform would ensure that policies are enforced as data flows across all applications, without requiring any special sharing mechanisms.
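As a rough illustration of the mechanism this abstract describes, tag-based IFC can be sketched as subset checks over secrecy and integrity labels. The `Entity` class, tag names and `can_flow` rule below are illustrative assumptions, not the authors' implementation:

```python
# Minimal sketch of Information Flow Control (IFC) label checks.
# Hypothetical illustration of the general idea, not the paper's system.

class Entity:
    """A process or data item carrying secrecy and integrity labels."""
    def __init__(self, name, secrecy=(), integrity=()):
        self.name = name
        self.secrecy = frozenset(secrecy)      # tags restricting where data may flow
        self.integrity = frozenset(integrity)  # tags vouching for data quality

def can_flow(src, dst):
    """A flow src -> dst is safe iff dst is at least as secret as src
    and src carries every integrity tag dst requires."""
    return src.secrecy <= dst.secrecy and dst.integrity <= src.integrity

patient_record = Entity("record", secrecy={"medical", "alice"})
hospital_app = Entity("app", secrecy={"medical", "alice", "audit"})
ad_service = Entity("ads")

print(can_flow(patient_record, hospital_app))  # True: the app's labels dominate
print(can_flow(patient_record, ad_service))    # False: 'medical' tag would leak
```

A platform enforcing this check on every data flow gets isolation by default (disjoint tags) and sharing by explicit label assignment, which is the "protection with flexible sharing" trade-off the paper argues for.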
Citations: 17
I/O Performance Modeling for Big Data Applications over Cloud Infrastructures
Pub Date : 2015-03-09 DOI: 10.1109/IC2E.2015.29
Ioannis Mytilinis, Dimitrios Tsoumakos, Verena Kantere, Anastassios Nanos, N. Koziris
Big Data applications receive an ever-increasing amount of attention, thus becoming a dominant class of applications deployed over virtualized environments. Cloud environments entail a large amount of complexity relative to I/O performance. The use of Big Data increases the complexity of I/O management as well as its characterization and prediction: as I/O operations become increasingly dominant in such applications, the intricacies of virtualization, different storage back ends and deployment setups significantly hinder our ability to analyze and correctly predict I/O performance. To that end, this work proposes an end-to-end modeling technique to predict the performance of I/O-intensive Big Data applications running over cloud infrastructures. We develop a model tuned over application and infrastructure dimensions: primitive I/O operations, data access patterns, storage back ends and deployment parameters. The trained model can be used to predict both I/O and general task performance. Our evaluation results show that for jobs dominated by I/O operations, such as I/O-bound MapReduce jobs, our model predicts execution time with an accuracy close to 90%, which decreases as application processing becomes more complex.
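The core idea (learn a model mapping I/O-level features to job runtime, then predict for unseen configurations) can be sketched with a toy single-feature least-squares fit. The feature choice, synthetic numbers and closed-form fit below are illustrative assumptions, not the paper's actual model:

```python
# Sketch: fit runtime = a * (I/O volume) + b on observed jobs, then predict.
# Illustrative only; the paper's model spans many more dimensions.

def fit_linear(xs, ys):
    """Ordinary least squares for y = a*x + b with a single feature."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Synthetic training data: (GB of I/O performed, measured runtime in seconds).
gb_io = [1, 2, 4, 8, 16]
runtime = [12, 21, 41, 79, 160]  # roughly linear in I/O volume

a, b = fit_linear(gb_io, runtime)
predicted = a * 32 + b  # predicted runtime of an unseen 32 GB job
```

For genuinely I/O-bound jobs a fit like this tracks reality well, which is consistent with the abstract's observation that accuracy drops as CPU-side processing grows more complex.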
Citations: 10
FIDDLE: Federated Infrastructure Discovery and Description Language
Pub Date : 2015-03-09 DOI: 10.1109/IC2E.2015.77
A. Willner, R. Loughnane, T. Magedanz
Considerable effort has been spent on designing architectures to manage heterogeneous resources across multiple administrative domains. Specific fields of application include federated cloud computing (Intercloud) approaches and distributed testbeds, among others. An important interoperability challenge that arises in this context is the exchange of information about the provided resources and their dependencies. Existing work usually rests upon schematic data models, which impedes the discovery and management of heterogeneous resources between autonomous sites. One way of addressing this issue is to exchange semantic information models. In this paper, we exploit such approaches to formally define federations, including their infrastructures and the life cycle of the offered resources and services. The requirements of this work have been derived from several research projects, and the results are in the process of being standardized by an international body. The main contribution of this work is a higher-level (upper) ontology and initial integration concepts for it. These contributions form a basis for further work in the general context of distributed semantic resource management.
Citations: 7
EAGER: Deployment-Time API Governance for Modern PaaS Clouds
Pub Date : 2015-03-09 DOI: 10.1109/IC2E.2015.69
Hiranya Jayathilaka, C. Krintz, R. Wolski
To track, control, and compel reuse of web APIs, we investigate a new approach to API governance: combined policy, implementation, and deployment control of web APIs. Our approach, called EAGER, provides a software architecture that integrates into PaaS platforms to support system-wide, deployment-time enforcement of governance policies. Specifically, EAGER checks for and prevents backward-incompatible API changes from being deployed into production PaaS clouds, enforces service reuse, and facilitates enforcement of other best practices in software maintenance via policies. Our experiments with an EAGER prototype show that enforcing API governance at deployment time in PaaS clouds is efficient and scales to thousands of APIs and policies.
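A deployment-time backward-compatibility check of the kind EAGER performs can be sketched as a diff over API specifications. The spec format (operation name to required parameters) and the function name are assumptions for illustration, not EAGER's actual interface:

```python
# Hypothetical sketch of a backward-compatibility gate run at deployment time.
# An API version is modeled as: {operation name: set of required parameters}.

def breaking_changes(old_api, new_api):
    """Return a list of backward-incompatible changes between two API versions."""
    problems = []
    for op, params in old_api.items():
        if op not in new_api:
            problems.append(f"operation removed: {op}")
        else:
            added = new_api[op] - params  # required params old clients don't send
            if added:
                problems.append(f"new required parameters on {op}: {sorted(added)}")
    return problems

v1 = {"getUser": {"id"}, "listUsers": set()}
v2 = {"getUser": {"id", "tenant"}}  # drops listUsers, adds a required param

issues = breaking_changes(v1, v2)
# A governance gate would reject the deployment if this list is non-empty.
```

Note that additions which old clients never observe (new operations, new optional parameters) are deliberately not flagged; only changes that can break existing callers block the deployment.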
Citations: 6
Efficient Prototyping of Fault Tolerant Map-Reduce Applications with Docker-Hadoop
Pub Date : 2015-03-09 DOI: 10.1109/IC2E.2015.73
J. Rey, M. Cogorno, Sergio Nesmachnow, L. Steffenel
Prototyping and testing distributed systems is considered a hard task because it is not always possible to reproduce a given sequence of events. While simulations may help with this task, they cannot replace testing and validation on real systems. In this paper we present Docker-Hadoop, a container-based virtualization platform designed to prototype, test and deploy MapReduce applications and systems. This tool allowed us to test and reproduce fault-tolerance scenarios that are especially interesting in the context of the PER-MARE project, which aims at adapting the Hadoop framework to the case of pervasive systems. Indeed, we developed a fault-tolerant component that circumvents the limitations of the original Hadoop and prevents job scheduling from stalling in the case of failures or network disconnections. Thanks to Docker-Hadoop, we could easily prototype and test our improved Hadoop; the first scalability and speedup results are presented in this paper.
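The fault-tolerance behaviour being tested (a failed or disconnected node must cause the task to be rescheduled rather than stall the job) can be simulated in a few lines. The scheduler below is a toy model with injected random failures, not the paper's component or Hadoop's scheduler:

```python
# Toy simulation of reschedule-on-failure: every task eventually completes
# even when the node it was placed on "crashes". Illustrative only.

import random

def run_job(tasks, nodes, fail_prob=0.3, seed=42):
    """Run all tasks to completion, rescheduling on simulated node failures.
    Returns how many attempts each task needed."""
    rng = random.Random(seed)       # fixed seed: reproducible failure sequence
    attempts = {t: 0 for t in tasks}
    pending = list(tasks)
    while pending:
        task = pending.pop(0)
        node = rng.choice(nodes)    # placement decision (ignored in this toy)
        attempts[task] += 1
        if rng.random() < fail_prob:
            pending.append(task)    # node failure: reschedule instead of stalling
        # else: task completed on `node`
    return attempts

attempts = run_job(tasks=["map-0", "map-1", "reduce-0"], nodes=["n1", "n2", "n3"])
```

This is exactly the kind of scenario that is hard to reproduce on real hardware and easy to script against a containerized cluster, which is the paper's motivation for Docker-Hadoop.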
Citations: 11
A Bird's-Eye View on Modelling Malleable Multi-cloud Applications
Pub Date : 2015-03-09 DOI: 10.1109/IC2E.2015.94
Mohammad Hamdaqa
Advances in cloud platforms have changed the application development landscape. Cloud platforms abstract the complexity of application delivery to enable rapid development and easy management. This changes the way development teams need to think about and deal with the underlying resources while building and managing their applications. This research describes a new methodology, supported by a modeling framework, to enable organizations that build cloud applications (e.g., SaaS providers) to exploit cloud platform building blocks in an unbiased manner and leverage the flexibility, reliability and scalability that these platforms provide to the application layer.
Citations: 1
Software-Defined Flow Table Pipeline
Pub Date : 2015-03-09 DOI: 10.1109/IC2E.2015.52
Xiaoye Sun, T. Ng, Guohui Wang
Software-Defined Networking (SDN) is revolutionizing data center networks for cloud computing with its ability to enable network virtualization and the powerful network resource management that are crucial in any multi-tenant environment. In order to support sophisticated network control logic, the data plane of a switch should have a flexible Flow Table Pipeline (FTP). However, the FTP on state-of-the-art SDN switches is hardware-defined, which greatly limits the advantages of using FTP in cloud computing systems. This paper removes this limitation by introducing software-defined FTP (SDFTP), which provides an extremely flexible FTP as the southbound interface of the SDN control plane. SDFTP offers an arbitrary number of pipeline stages and adaptive flow table sizing at runtime by building Software-Defined Flow Tables (SDFTs). Our analysis shows that SDFTP can create 138 times more adaptively sized pipeline stages than the hardware-defined data plane while maintaining comparable performance.
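A multi-stage flow-table pipeline of the kind SDFTP virtualizes can be sketched as a list of match-action tables evaluated in sequence. The table contents, field names and first-match-wins semantics below are illustrative assumptions, not SDFTP's design:

```python
# Toy model of an OpenFlow-style flow table pipeline implemented in software:
# each stage is a table of {match, action} rules; the first matching rule in
# each table contributes its action. Illustrative sketch only.

def matches(rule, packet):
    """A rule matches if every field it specifies equals the packet's value.
    An empty match dict is a wildcard (table-miss / default rule)."""
    return all(packet.get(k) == v for k, v in rule["match"].items())

def run_pipeline(pipeline, packet):
    """Pass a packet through each flow table in turn, collecting actions."""
    actions = []
    for table in pipeline:
        for rule in table:
            if matches(rule, packet):
                actions.append(rule["action"])
                break  # first matching rule per table wins
    return actions

# Two software-defined stages: tenant isolation, then forwarding.
pipeline = [
    [{"match": {"tenant": "A"}, "action": "set_vlan:100"},
     {"match": {}, "action": "drop"}],                      # default rule
    [{"match": {"dst": "10.0.0.2"}, "action": "output:2"}],
]

print(run_pipeline(pipeline, {"tenant": "A", "dst": "10.0.0.2"}))
# ['set_vlan:100', 'output:2']
```

Because the tables are ordinary data structures, stages can be added and tables resized at runtime, which is precisely the flexibility a hardware-defined pipeline lacks.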
Citations: 14
Cloud Storage Infrastructure Optimization Analytics
Pub Date : 2015-03-09 DOI: 10.1109/IC2E.2015.83
R. Routray
Emergence and adoption of cloud computing have become widely prevalent given the value proposition it brings to an enterprise in terms of agility and cost effectiveness. Big data analytical capability (specifically, treating storage/system management as a big data problem for a service provider) delivered through cloud models is defined as Analytics as a Service or Software as a Service. This service simplifies obtaining useful insights from an operational enterprise data center, leading to cost and performance optimizations. Software-defined environments decouple the control planes from the data planes that were often vertically integrated in traditional networking or storage systems. The decoupling between the control planes and the data planes enables opportunities for improved security, resiliency and IT optimization in general. This talk describes our novel approach of hosting the systems management platform (a.k.a. the control plane) in the cloud, offered to enterprises in the Software as a Service (SaaS) model. Specifically, this presentation focuses on the analytics layer, where the SaaS paradigm enables data centers to visualize, optimize and forecast infrastructure via a simple capture, analyze and govern framework. At its core, it uses big data analytics to extract actionable insights from system management metrics data. Our system was developed in research and is deployed across customers, with a core focus on the agility, elasticity and scalability of the analytics framework. We demonstrate a few system/storage management analytics case studies showing cost and performance optimization for both cloud consumers and service providers. Actionable insights generated from the analytics platform are implemented in an automated fashion via an OpenStack-based platform.
Citations: 1
Automating Cloud Service Level Agreements Using Semantic Technologies
Pub Date : 2015-03-09 DOI: 10.1109/IC2E.2015.63
K. Joshi, C. Pearce
Cloud-related legal documents, such as terms of service or customer agreements, are usually managed as plain text files. Hence, extensive manual effort is required to monitor cloud service performance by cross-referencing the metrics and measures agreed upon in these documents. We have significantly automated the process of managing and monitoring cloud Service Level Agreements (SLAs) using semantic web technologies such as OWL, RDF and SPARQL. In this paper, we describe in detail the cloud SLA ontology and the prototype that we have developed to illustrate how SLA measures can be automatically extracted from the legal Terms of Service available on cloud provider websites.
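The gain from moving SLA terms out of plain text is that they become mechanically queryable. As a sketch, SLA terms can be represented as RDF-style triples and matched with SPARQL-like patterns; a real system would use an OWL ontology and a SPARQL engine, and the vocabulary below (`sla:hasMetric`, the thresholds, etc.) is entirely hypothetical:

```python
# Illustrative sketch: SLA terms as (subject, predicate, object) triples plus
# a tiny pattern matcher standing in for SPARQL. Vocabulary is made up.

triples = [
    ("ex:ComputeService", "sla:hasMetric", "sla:MonthlyUptime"),
    ("sla:MonthlyUptime", "sla:threshold", "99.95"),
    ("sla:MonthlyUptime", "sla:creditIfBelow", "10%"),
    ("ex:StorageService", "sla:hasMetric", "sla:Durability"),
    ("sla:Durability", "sla:threshold", "99.999999999"),
]

def query(pattern):
    """Match an (s, p, o) pattern against the store; None acts as a variable."""
    return [t for t in triples
            if all(p is None or p == v for p, v in zip(pattern, t))]

# "Which metrics does the compute SLA define, and what are their thresholds?"
for _, _, metric in query(("ex:ComputeService", "sla:hasMetric", None)):
    for _, _, threshold in query((metric, "sla:threshold", None)):
        print(metric, threshold)  # feeds an automated SLA monitor
```

Once agreements are in this form, monitoring reduces to comparing measured metrics against the queried thresholds instead of re-reading legal prose.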
Citations: 19
Harp: Collective Communication on Hadoop
Pub Date : 2015-03-09 DOI: 10.1109/IC2E.2015.35
Bingjing Zhang, Yang Ruan, J. Qiu
Big data processing tools have evolved rapidly in recent years. MapReduce has proven very successful but is not optimized for many important analytics, especially those involving iteration. In this regard, iterative MapReduce frameworks improve the performance of MapReduce job chains through caching. Further, Pregel, Giraph and GraphLab abstract data as a graph and process it in iterations. But all these tools are designed around a fixed data abstraction and have limited collective communication support for synchronizing application data and algorithm control state among parallel processes. In this paper, we introduce a collective communication abstraction layer which provides efficient collective communication operations on several common data abstractions such as arrays, key-values and graphs, and define a Map Collective programming model which serves the diverse collective communication demands of different parallel algorithms. We implement a library called Harp to provide the features above and plug it into Hadoop so that applications abstracted in the Map Collective model can easily be developed on top of the MapReduce framework and conveniently integrated with other tools in the Apache Big Data Stack.
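A representative collective operation of the kind Harp layers onto Hadoop is `allreduce` over arrays: every parallel worker contributes a partial array and all workers receive the combined result. The sketch below simulates this with plain Python lists; it illustrates the semantics only and uses none of Harp's API:

```python
# Sketch of an 'allreduce' collective over per-worker arrays, the pattern
# iterative analytics use to synchronize model state each iteration.

def allreduce(partials):
    """Element-wise sum of every worker's partial array, returned to all.
    `partials` is one list per worker; all lists must have equal length."""
    combined = [sum(values) for values in zip(*partials)]
    return [list(combined) for _ in partials]  # each worker gets its own copy

# Three map tasks each hold partial gradients of the same model vector.
worker_arrays = [
    [1.0, 2.0, 3.0],
    [0.5, 0.5, 0.5],
    [2.0, 1.0, 0.0],
]

results = allreduce(worker_arrays)
# After the collective, every worker holds [3.5, 3.5, 3.5].
```

In plain MapReduce this synchronization costs a full shuffle and job restart per iteration; expressing it as a collective is what lets the Map Collective model serve iterative algorithms efficiently.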
Citations: 38
Journal: 2015 IEEE International Conference on Cloud Engineering