
Latest Publications: 2013 IEEE 5th International Conference on Cloud Computing Technology and Science

Supporting Cloud Accountability by Collecting Evidence Using Audit Agents
Pub Date : 2013-12-02 DOI: 10.1109/CloudCom.2013.32
T. Rübsamen, C. Reich
Today's cloud services process data while often leaving it unclear to customers how, and by whom, their data is collected, stored and processed. This hinders the adoption of cloud computing by businesses. One way to address this problem is to make clouds more accountable, and such accountability must be provable by third parties through audits. In this paper we present a cloud-adapted evidence collection process and possible evidence sources, and discuss privacy issues in the context of audits. We introduce an agent-based architecture that can perform audit processing and reporting continuously. Agents can be specialized to perform specific audit tasks (e.g., log data analysis) whenever necessary, to reduce complexity and the amount of collected evidence. Finally, a multi-provider scenario is discussed, which shows the usefulness of this approach.
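As an illustration of the specialized audit agents mentioned in the abstract, the sketch below shows a minimal log-analysis agent that emits evidence records only for accesses by principals outside an approved list. All names, the log format, and the `Evidence` record are illustrative assumptions, not the paper's actual API.

```python
# Hypothetical sketch of a specialized audit agent: scan log lines for
# data-access events and keep evidence only for non-approved principals,
# reducing the volume of collected evidence.
from dataclasses import dataclass

@dataclass
class Evidence:
    timestamp: str
    principal: str
    action: str

def log_analysis_agent(log_lines, approved_principals):
    """Collect evidence for READ accesses by non-approved principals."""
    evidence = []
    for line in log_lines:
        # Assumed log format: "<timestamp> <principal> <action> <object>"
        ts, principal, action, _ = line.split(" ", 3)
        if action == "READ" and principal not in approved_principals:
            evidence.append(Evidence(ts, principal, action))
    return evidence

logs = [
    "2013-12-02T10:00 alice READ customer_db",
    "2013-12-02T10:05 mallory READ customer_db",
]
findings = log_analysis_agent(logs, {"alice"})
print(findings)  # one Evidence record, for "mallory"
```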
Citations: 27
A Dynamic Complex Event Processing Architecture for Cloud Monitoring and Analysis
Pub Date : 2013-12-02 DOI: 10.1109/CloudCom.2013.146
Afef Mdhaffar, Riadh Ben Halima, M. Jmaiel, Bernd Freisleben
Cloud monitoring and analysis are challenging tasks that have recently been addressed by Complex Event Processing (CEP) techniques. CEP systems can process many incoming event streams and execute continuously running queries to analyze the behavior of a Cloud. Based on a Cloud performance monitoring and analysis use case, this paper experimentally evaluates different CEP architectures in terms of precision, recall and other performance indicators. The results of the experimental comparison are used to propose a novel dynamic CEP architecture for Cloud monitoring and analysis. The novel dynamic CEP architecture is designed to dynamically switch between different centralized and distributed CEP architectures depending on the current machine load and network traffic conditions in the observed Cloud environment.
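The evaluation above compares CEP architectures by precision and recall. As a reference for those two metrics, a small sketch over detected versus actual anomalous events (event IDs are illustrative, not from the paper):

```python
# Precision: fraction of detections that were real events.
# Recall: fraction of real events that were detected.
def precision_recall(detected, actual):
    detected, actual = set(detected), set(actual)
    tp = len(detected & actual)  # true positives
    precision = tp / len(detected) if detected else 0.0
    recall = tp / len(actual) if actual else 0.0
    return precision, recall

p, r = precision_recall(detected={"e1", "e2", "e3", "e4"},
                        actual={"e1", "e2", "e5"})
print(p, r)  # 0.5 (2 of 4 detections correct), 2/3 (2 of 3 events found)
```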
Citations: 17
Intelligent MapReduce Based Framework for Labeling Instances in Evolving Data Stream
Pub Date : 2013-12-02 DOI: 10.1109/CloudCom.2013.152
Ahsanul Haque, Brandon Parker, L. Khan, B. Thuraisingham
In our prior work, we proposed a multi-tiered, ensemble-based robust method to address the challenges of labeling instances in an evolving data stream. The bottleneck of that work is that it must build AdaBoost ensembles for each of the numeric features. This can raise scalability issues, since the number of features in a data stream can at times be very large. In this paper, we propose an intelligent approach to build this large number of AdaBoost ensembles with MapReduce-based parallelism. We show that this approach can help our base method achieve significant scalability without compromising classification accuracy. We analyze different aspects of our design to highlight the advantages and disadvantages of the approach, and we compare and analyze the performance of the proposed approach in terms of execution time, speedup, and scale-up.
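The core MapReduce pattern here is to route each numeric feature's training data to its own reducer so one ensemble per feature can be trained in parallel. A minimal single-process sketch of that pattern, with a trivial mean-threshold "stump" standing in for the actual AdaBoost learner (the stand-in is my simplification, not the paper's method):

```python
# Map step: emit (feature_index, (value, label)) pairs so that each
# feature's data ends up at one reducer. Reduce step: train one model
# per feature (here, just a mean threshold as a placeholder learner).
from collections import defaultdict

def map_phase(instances):
    for features, label in instances:
        for j, v in enumerate(features):
            yield j, (v, label)

def reduce_phase(pairs):
    by_feature = defaultdict(list)
    for j, value_label in pairs:
        by_feature[j].append(value_label)
    models = {}
    for j, data in by_feature.items():
        # Placeholder "ensemble": the mean of the feature's values.
        models[j] = sum(v for v, _ in data) / len(data)
    return models

instances = [((1.0, 10.0), 0), ((3.0, 30.0), 1)]
print(reduce_phase(map_phase(instances)))  # {0: 2.0, 1: 20.0}
```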
Citations: 3
An Architectural Model for Deploying Critical Infrastructure Services in the Cloud
Pub Date : 2013-12-02 DOI: 10.1109/CloudCom.2013.67
M. Schöller, R. Bless, Frank Pallas, Jens Horneber, Paul Smith
The Cloud Computing operational model is a major recent trend in the IT industry, which has gained tremendous momentum. This trend will likely also reach the IT services that support Critical Infrastructures (CI), because of the potential cost savings and benefits of increased resilience due to elastic cloud behaviour. However, realizing CI services in the cloud introduces security and resilience requirements that existing offerings do not address well. For example, due to the opacity of cloud environments, the risks of deploying cloud-based CI services are difficult to assess, especially at the technical level, but also from legal or business perspectives. This paper discusses challenges and objectives related to bringing CI services into cloud environments, and presents an architectural model as a basis for the development of technical solutions with respect to those challenges.
Citations: 16
Cooperative Scheduling Anti-load Balancing Algorithm for Cloud: CSAAC
Pub Date : 2013-12-02 DOI: 10.1109/CloudCom.2013.63
Cheikhou Thiam, Georges Da Costa, J. Pierson
In the past decade, increasing attention has focused on job scheduling strategies in a variety of scenarios. Due to the characteristics of clouds, meta-scheduling has become an important scheduling pattern because it orchestrates resources managed by independent local schedulers and bridges the gap between participating nodes. Likewise, to overcome issues such as bottlenecks, overloading, under-loading, and impractical centralized administrative management, which typically arise in conventional centralized or hierarchical schemes, distributed scheduling is emerging as a promising approach because of its scalability and flexibility. In this paper, we introduce a decentralized dynamic scheduling approach entitled Cooperative Scheduling Anti-load balancing Algorithm for Cloud (CSAAC). To validate CSAAC we used a simulator which extends the MaGateSim simulator and provides better support for energy-aware scheduling algorithms. The goal of CSAAC is to achieve optimized scheduling performance and energy gain over the scope of the overall cloud, rather than over individual participating nodes. Extensive experimental evaluation with a real workload dataset shows that, compared to a centralized scheduling scheme with Best Fit as the meta-scheduling policy, CSAAC can achieve a 30%-61% energy gain and a 20%-30% shorter average job execution time in a decentralized manner, without requiring detailed real-time processing information from participating nodes.
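"Anti-load balancing" concentrates jobs on as few nodes as possible so that idle nodes can be powered down for energy savings. A minimal centralized sketch of that placement rule (CSAAC itself is decentralized and cooperative; the node capacities below are illustrative):

```python
# Anti-load-balancing placement: assign the job to the MOST loaded node
# that can still accept it, instead of the least loaded one, so load
# consolidates and idle nodes can be switched off.
def place_job(nodes, job_load):
    """nodes: {name: (current_load, capacity)}; returns the chosen node
    name, or None if no node can fit the job."""
    candidates = [(load, name) for name, (load, cap) in nodes.items()
                  if load + job_load <= cap]
    if not candidates:
        return None
    _, best = max(candidates)  # fullest node that still fits the job
    return best

nodes = {"n1": (0.7, 1.0), "n2": (0.2, 1.0), "n3": (0.9, 1.0)}
print(place_job(nodes, 0.2))  # n1: n3 is fuller but cannot fit the job
```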
Citations: 16
Non-tunneling Edge-Overlay Model Using OpenFlow for Cloud Datacenter Networks
Pub Date : 2013-12-02 DOI: 10.1109/CloudCom.2013.122
Ryota Kawashima, H. Matsuo
In the current SDN paradigm, an edge-overlay (distributed tunneling) model using L2-in-L3 tunneling protocols such as VXLAN has attracted attention for multi-tenant data center networks. The edge-overlay model enables rapid deployment of virtual networks onto existing traditional network facilities, ensures flexible IP/MAC address allocation to VMs, and extends the number of virtual networks beyond the VLAN ID limitation. However, this model has performance and compatibility problems in traditional network environments. For L2 data center networks, this paper proposes a pure software approach that uses OpenFlow virtual switches to realize an edge overlay without IP tunneling. Our model leverages header rewriting together with host-based VLAN ID usage to ensure address-space isolation and scalability in the number of virtual networks. No special hardware, such as OpenFlow hardware switches, is required; only software-based virtual switches and the controller are used. We evaluate the performance of the proposed model against tunneling models using the GRE and VXLAN protocols; our model shows better performance and lower CPU usage. In addition, qualitative evaluations of the model are conducted from a broader perspective.
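To make the header-rewriting idea concrete, the sketch below builds a schematic egress flow rule as plain data: tag the frame with a host-local VLAN ID for the tenant and rewrite the destination MAC toward the remote physical host, so the L2 fabric forwards it without IP tunneling. The field and action names loosely mirror OpenFlow terminology but are illustrative only, not a real controller API or the paper's exact rule set.

```python
# Schematic egress rule at the source host's virtual switch (plain data,
# not a real OpenFlow library; names like "uplink" are hypothetical).
def egress_rule(tenant_vlan, dst_host_mac, dst_vm_mac):
    return {
        "match": {"dl_dst": dst_vm_mac},          # frame aimed at remote VM
        "actions": [
            {"type": "push_vlan", "vlan_id": tenant_vlan},  # host-local tag
            {"type": "set_dl_dst", "mac": dst_host_mac},    # rewrite header
            {"type": "output", "port": "uplink"},
        ],
    }

rule = egress_rule(100, "aa:bb:cc:00:00:01", "de:ad:be:ef:00:02")
print(rule["actions"])
```

An ingress rule at the destination host would perform the inverse rewrite (pop the VLAN tag and restore the VM's MAC) before delivering the frame.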
Citations: 20
Dynamic Exception Handling for Partitioned Workflow on Federated Clouds
Pub Date : 2013-12-02 DOI: 10.1109/CloudCom.2013.34
Z. Wen, P. Watson
The aim of federated cloud computing is to allow applications to utilise a set of clouds in order to provide a better combination of properties, such as cost, security, performance and dependability, than can be achieved on a single cloud. In this paper we focus on security and dependability: introducing a new automatic method for dynamically partitioning applications across the set of clouds in an environment in which clouds can fail during workflow execution. The method deals with exceptions that occur when clouds fail, and selects the best way to repartition the workflow, whilst still meeting security requirements. This avoids the need for developers to have to code ad-hoc solutions to address cloud failure, or the alternative of simply accepting that an application will fail when a cloud fails. This paper's method builds on earlier work [1] on partitioning workflows over federated clouds to minimise cost while meeting security requirements. It extends it by pre-generating the graph of all possible ways to partition the workflow, and adding weights to the paths through the graph so that when a cloud fails, it is possible to quickly determine the cheapest possible way to make progress from that point to the completion of the workflow execution (if any path exists). The method has been implemented and evaluated through a tool which exploits e-Science Central: a portable, high-level cloud platform. The workflow application is created and distributed across a set of e-Science Central instances. By monitoring the state of each executing e-Science Central instance, the system handles exceptions as they occur at run-time. The paper describes the method and an evaluation that utilises a set of examples.
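Once the graph of all partitioning options is pre-generated with weighted paths, repartitioning after a cloud failure reduces to a cheapest-path search from the current state to workflow completion. A small Dijkstra sketch over such a graph (node names and costs are illustrative):

```python
# Cheapest-path search over a pre-generated graph of partitioning options.
# graph: {node: [(neighbor, cost), ...]}; returns (total_cost, path) or
# None when the goal is unreachable (e.g. too many clouds have failed).
import heapq

def cheapest_path(graph, start, goal):
    frontier = [(0, start, [start])]
    seen = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(frontier, (cost + w, nxt, path + [nxt]))
    return None

graph = {"s0": [("a", 4), ("b", 1)], "b": [("a", 1)], "a": [("done", 2)]}
print(cheapest_path(graph, "s0", "done"))  # (4, ['s0', 'b', 'a', 'done'])
```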
Citations: 13
Content Espresso: A System for Large File Sharing Using Globally Dispersed Storage
Pub Date : 2013-12-02 DOI: 10.1109/CloudCom.2013.162
Daisuke Ando, Masahiko Kitamura, F. Teraoka, K. Kaneko
Sharing files across the world with high access throughput and low storage cost is a growing demand for applications that use large files. However, existing file sharing systems do not reconcile these two conflicting requirements, and we have found no prior research that does. This paper clarifies the requirements of a global large-file sharing system and defines a design goal consisting of three user perspectives (fast retrieval, user-defined file availability, and owner-based file management) and one system-operator perspective (flexibility in byte placement). Content Espresso satisfies this goal with four techniques: a three-section model, distributed chunk storage, forward error correction, and UDP retrieval. Content Espresso delivers large files to a client utilizing as much bandwidth as the client access link allows, even when servers are located far away from the client.
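Forward error correction lets retrieval proceed even when some dispersed chunks are slow or lost. The simplest illustration is a single XOR parity chunk that can rebuild any one missing data chunk (a real deployment would use a stronger code, such as Reed-Solomon; this sketch is mine, not the paper's scheme):

```python
# XOR-parity FEC over equal-sized chunks: parity = c0 ^ c1 ^ ... ^ cn.
# Any single missing chunk equals the XOR of the parity with the rest.
def xor_parity(chunks):
    parity = bytes(len(chunks[0]))
    for c in chunks:
        parity = bytes(a ^ b for a, b in zip(parity, c))
    return parity

def recover(chunks_with_one_missing, parity):
    """Rebuild the single missing chunk (the None entry) from the rest."""
    missing = parity
    for c in chunks_with_one_missing:
        if c is not None:
            missing = bytes(a ^ b for a, b in zip(missing, c))
    return missing

chunks = [b"clou", b"dcom", b"2013"]
parity = xor_parity(chunks)
print(recover([b"clou", None, b"2013"], parity))  # b'dcom'
```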
Citations: 4
Using Iterative MapReduce for Parallel Virtual Screening
Pub Date : 2013-12-02 DOI: 10.1109/CLOUDCOM.2013.99
Laeeq Ahmed, Åke Edlund, E. Laure, O. Spjuth
Virtual screening is a technique in cheminformatics used for drug discovery by searching large libraries of molecular structures. Virtual screening often uses SVM, a supervised machine learning technique used for regression and classification analysis. Virtual screening using SVM not only involves huge datasets; it is also computationally expensive, with a complexity that can grow at least up to O(n²). SVM-based applications most commonly use MPI, which becomes complex and impractical with large datasets. As an alternative to MPI, MapReduce and its various implementations have been successfully used on commodity clusters to analyze very large datasets. Given the large libraries of molecular structures involved, virtual screening is a good candidate for MapReduce. In this paper we present a MapReduce implementation of SVM-based virtual screening using Spark, an iterative MapReduce programming model. We show that our implementation has good scaling behaviour and opens up the possibility of using huge public cloud infrastructures efficiently for virtual screening.
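SVM training fits the iterative MapReduce model because each gradient step is a map over the data (per-point subgradients) followed by a reduce (their sum); Spark distributes exactly that pattern across partitions each iteration. A dependency-free sketch of one such training loop for a linear SVM with hinge loss (toy data; the paper trains on molecular-structure datasets with Spark):

```python
# Iterative map/reduce training of a linear SVM with hinge loss.
# map: compute each point's subgradient; reduce: sum the subgradients.
from functools import reduce

def hinge_subgradient(point, w):
    x, y = point  # y in {-1, +1}
    margin = y * sum(wi * xi for wi, xi in zip(w, x))
    if margin >= 1:
        return [0.0] * len(w)   # correctly classified with margin
    return [-y * xi for xi in x]

def train(data, dims, iters=100, lr=0.1, lam=0.01):
    w = [0.0] * dims
    for _ in range(iters):
        grads = map(lambda p: hinge_subgradient(p, w), data)   # map step
        g = reduce(lambda a, b: [ai + bi for ai, bi in zip(a, b)], grads)
        w = [wi - lr * (gi / len(data) + lam * wi)             # update
             for wi, gi in zip(w, g)]
    return w

data = [((1.0, 2.0), 1), ((2.0, 3.0), 1),
        ((-1.0, -2.0), -1), ((-2.0, -1.0), -1)]
w = train(data, dims=2)
print(all((1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else -1) == y
          for x, y in data))  # True: all training points classified
```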
Citations: 9
Cost-Aware Client-Side File Caching for Data-Intensive Applications
Pub Date : 2013-12-02 DOI: 10.1109/CloudCom.2013.140
Yaning Huang, Hai Jin, Xuanhua Shi, Song Wu, Yong Chen
Parallel and distributed file systems are widely used to provide high throughput in high-performance computing and cloud computing systems. To increase parallelism, I/O requests are partitioned into multiple sub-requests (or 'flows') and distributed across different data nodes. File-system performance degrades severely when data nodes have highly unbalanced response times. Client-side caching offers a promising direction for addressing this issue. However, prior work has primarily used client-side memory as a read cache and employed a write-through policy, which requires a synchronous update for every write and significantly under-utilizes the client-side cache for write-intensive applications. Observing that the cost of an I/O request is dominated by its straggler sub-requests, we propose a cost-aware client-side file caching (CCFC) strategy designed to cache the sub-requests with high I/O cost on the client end. This caching policy enables a new trade-off across write performance, consistency guarantees, and cache size. Using the MADbench2 benchmark workload, we evaluate our new cache policy alongside conventional write-through. We find that the proposed CCFC strategy achieves up to 110% throughput improvement over conventional write-through policies with the same cache size on an 85-node cluster.
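The CCFC idea — keep the sub-requests whose I/O cost is high and evict cheap ones first — can be sketched as a small cost-aware cache. `CostAwareCache` and its `cost` parameter are hypothetical names for illustration under that assumption, not the authors' implementation:

```python
import heapq

class CostAwareCache:
    """Minimal sketch of a cost-aware client-side cache: entries are
    admitted with an estimated I/O cost (e.g. the data node's observed
    response time), and when the cache is full the *cheapest* entry is
    evicted first, so expensive straggler sub-requests stay cached."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = {}   # key -> (cost, value)
        self.heap = []   # min-heap of (cost, key): cheapest-first eviction

    def put(self, key, value, cost):
        if key not in self.data and len(self.data) >= self.capacity:
            # Evict the entry with the lowest I/O cost still present,
            # skipping stale heap entries left over from updates.
            while self.heap:
                c, k = heapq.heappop(self.heap)
                if k in self.data and self.data[k][0] == c:
                    del self.data[k]
                    break
        self.data[key] = (cost, value)
        heapq.heappush(self.heap, (cost, key))

    def get(self, key):
        entry = self.data.get(key)
        return entry[1] if entry else None

cache = CostAwareCache(capacity=2)
cache.put("blockA", b"...", cost=5.0)  # sub-request served by a slow node
cache.put("blockB", b"...", cost=0.2)  # sub-request served by a fast node
cache.put("blockC", b"...", cost=3.0)  # full: evicts the cheap blockB
print(sorted(cache.data))              # prints ['blockA', 'blockC']
```

A real client-side cache would also have to handle the write-back/consistency side of the trade-off the abstract mentions; this sketch covers only the cost-ordered eviction.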
Citations: 2