
Middleware for Grid Computing: Latest Publications

Estimating resource costs of data-intensive workloads in public clouds
Pub Date : 2012-12-03 DOI: 10.1145/2405136.2405139
Rizwan Mian, Patrick Martin, F. Zulkernine, J. L. Vázquez-Poletti
The promise of "infinite" resources given by the cloud computing paradigm has led to recent interest in exploiting clouds for large-scale data-intensive computing. In this paper, we present a model to estimate the resource costs for executing data-intensive workloads in a public cloud. The cost model quantifies the cost-effectiveness of a resource configuration for a given workload with consumer performance requirements expressed as SLAs, and is a key component of a larger framework for resource provisioning in clouds. We instantiate the cost model for the Amazon cloud, and experimentally evaluate the impact of key factors on the accuracy of the model.
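The abstract does not give the model's equations; as a minimal sketch of the kind of cost model it describes, the function below charges for instance-hours and data transfer and adds a flat penalty when the predicted runtime misses an SLA deadline. All rates, the linear throughput assumption and the penalty rule are illustrative assumptions, not the authors' model or Amazon's pricing.

```python
from dataclasses import dataclass

@dataclass
class Config:
    instances: int          # number of VM instances provisioned
    hourly_rate: float      # $ per instance-hour (assumed, not real AWS pricing)
    throughput_gb_h: float  # GB one instance processes per hour (assumed)

def workload_cost(cfg: Config, data_gb: float, sla_hours: float,
                  transfer_rate: float = 0.09, penalty: float = 50.0) -> float:
    """Rough dollar cost of one data-intensive workload run on a configuration.

    Illustrative only: runtime comes from an assumed linear throughput model,
    and the SLA term is a flat penalty whenever the deadline is missed.
    """
    hours = data_gb / (cfg.instances * cfg.throughput_gb_h)   # predicted runtime
    compute = cfg.instances * hours * cfg.hourly_rate         # instance-hour charges
    transfer = data_gb * transfer_rate                        # data transfer charges
    sla_penalty = penalty if hours > sla_hours else 0.0       # SLA violation cost
    return compute + transfer + sla_penalty

if __name__ == "__main__":
    # Compare two hypothetical configurations for a 500 GB workload with a 2 h SLA.
    for cfg in (Config(4, 0.20, 40.0), Config(8, 0.20, 40.0)):
        print(cfg.instances, "instances ->", round(workload_cost(cfg, 500.0, 2.0), 2), "USD")
```

Under these made-up numbers the larger configuration is cheaper overall because it avoids the SLA penalty, which is exactly the kind of trade-off such a cost model is meant to expose.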
Citations: 14
An economic model for green cloud
Pub Date : 2012-12-03 DOI: 10.1145/2405136.2405141
Tridib Mukherjee, K. Dasgupta, Sujit Gujar, Gueyoung Jung, Haengju Lee
A novel economic model for cloud-based services is presented that: (i) transparently presents the energy demands of services to customers in a simple abstract form, called a green point, which is understandable to any general user; (ii) provides economic incentives (through dynamic discounts) that motivate customers to select greener configurations; and (iii) offers service prices to customers such that the profit of the cloud vendor is maximized while the discounts are provided. Prices are differentiated across customer classes (e.g. gold, silver, and bronze) and set dynamically from a posterior distribution over resource demand that accounts for both current demand and willingness to adopt a green configuration. The model enables a paradigm shift in cloud service offerings, giving users higher transparency and control knobs for greener configurations. Preliminary results indicate higher profit using the proposed model compared to static pricing in existing pay-per-use service offerings.
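The pricing rules are not spelled out in the abstract; a minimal sketch of class-based dynamic "green" discounting under assumed numbers could look as follows. The base prices, discount caps, green-point scale and the demand term are all illustrative stand-ins, not the model proposed in the paper.

```python
# Illustrative sketch of class-based dynamic "green" discounting.
# Base prices, discount caps and the green-point scale are assumptions.
BASE_PRICE = {"gold": 1.00, "silver": 0.80, "bronze": 0.60}    # $ per service unit
MAX_DISCOUNT = {"gold": 0.30, "silver": 0.20, "bronze": 0.10}  # cap per class

def quoted_price(customer_class: str, green_points: float, demand_factor: float) -> float:
    """Price one service unit for a customer who accepts a greener configuration.

    green_points in [0, 1]: 0 = most energy-hungry, 1 = greenest configuration.
    demand_factor in [0, 1]: current load; high demand shrinks the discount so
    the vendor's profit is protected (a crude stand-in for the posterior-demand term).
    """
    base = BASE_PRICE[customer_class]
    discount = MAX_DISCOUNT[customer_class] * green_points * (1.0 - demand_factor)
    return base * (1.0 - discount)

if __name__ == "__main__":
    for cls in ("gold", "silver", "bronze"):
        print(cls, round(quoted_price(cls, green_points=0.8, demand_factor=0.25), 3))
```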
Citations: 9
From CPU to GP-GPU: challenges and insights in GPU-based environmental simulations
Pub Date : 2012-12-03 DOI: 10.1145/2405136.2405142
Jools Chadwick, François Taïani, J. Beecham
From economics to natural sciences, many disciplines use complex models and simulations to better understand the world, but the unknown parameters of these models can be difficult to find. Looking to optimise the search for such parameters, many turn to the high parallelism afforded by general purpose Graphical Processing Unit (GP-GPU) programming. This paper discusses the challenges faced and lessons learned when porting such a marine ecology simulation from a pure-CPU implementation to make use of GPU technology. While this is a specific implementation, many of the problems we encountered apply generally to GPU-based simulations. They therefore hint at the potential for reusable solutions to GPU-based environmental simulations, and pave the way for a generic GPU-middleware for natural sciences.
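The paper reports porting experience rather than an algorithm, but the data-parallel pattern it relies on, evaluating one model under many candidate parameter sets in lockstep, can be sketched with NumPy as a CPU stand-in for a GPU kernel (one thread per candidate). The toy population model, parameter ranges and fitness measure are assumptions for illustration only.

```python
import numpy as np

def simulate(growth, mortality, steps=100, pop0=1.0):
    """Toy population model standing in for the marine-ecology simulation.

    growth and mortality are arrays of candidate parameter values; every
    candidate is advanced in lockstep, the same data-parallel shape a
    GP-GPU kernel would exploit.
    """
    pop = np.full_like(growth, pop0)
    for _ in range(steps):
        pop = pop + growth * pop - mortality * pop * pop   # logistic-style step
    return pop

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    growth = rng.uniform(0.01, 0.10, size=10_000)      # candidate parameters
    mortality = rng.uniform(0.001, 0.01, size=10_000)
    target = 8.0                                        # assumed observed value
    fitness = np.abs(simulate(growth, mortality) - target)
    best = int(np.argmin(fitness))
    print("best candidate:", growth[best], mortality[best], "error:", fitness[best])
```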
Citations: 0
Experimenter's portal: the collection, management and analysis of scientific data from remote sites
Pub Date : 2012-12-03 DOI: 10.1145/2405136.2405143
M. Bauer, N. McIntyre, N. Sherry, J. Qin, Marina Suominen-Fuller, Y. Xie, O. Mola, D. Maxwell, D. Liu, E. Matias
This paper describes an e-Science initiative to enable teams of scientists to run experiments with secure links at one or more advanced research facilities. The software provides a widely distributed team with a set of controls and screens via common browsers to operate, observe and record essential parts of an experiment and to access remote cloud-based analysis software to process the large data sets that are often involved in complex experiments. This paper describes the architecture of the software, the underlying web services used for remote access to research facilities and describes the cloud-based approach for data analysis. The core services are general and can be used as the basis for access to a variety of systems, though specific screen interfaces and analysis software must be tailored to a facility. For illustrative purposes, we focus on use of the system to access a single site - a synchrotron beamline at the Canadian Light Source. We conclude with a discussion of the generality and extensibility of the software and services.
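The web-service interfaces themselves are not described in the abstract; purely as an illustration of the remote-access pattern (submit an analysis job over data gathered at a facility, then poll for the cloud-side result), a hypothetical client could look like the sketch below. The base URL, endpoint paths, payload fields and bearer-token scheme are invented for this sketch and are not the portal's actual API.

```python
import time
import requests  # third-party HTTP client (pip install requests)

PORTAL = "https://portal.example.org/api"   # hypothetical base URL

def submit_analysis(dataset_id: str, pipeline: str, token: str) -> str:
    """Submit a cloud-side analysis job for data collected at a remote facility.

    All field names are illustrative; a real portal defines its own schema
    and authentication scheme.
    """
    resp = requests.post(
        f"{PORTAL}/analyses",
        json={"dataset": dataset_id, "pipeline": pipeline},
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["job_id"]

def wait_for_result(job_id: str, token: str, poll_s: float = 10.0) -> dict:
    """Poll the portal until the cloud-based analysis finishes or fails."""
    while True:
        resp = requests.get(f"{PORTAL}/analyses/{job_id}",
                            headers={"Authorization": f"Bearer {token}"},
                            timeout=30)
        resp.raise_for_status()
        status = resp.json()
        if status["state"] in ("finished", "failed"):
            return status
        time.sleep(poll_s)
```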
Citations: 4
Using relative costs in workflow scheduling to cope with input data uncertainty
Pub Date : 2012-12-03 DOI: 10.1145/2405136.2405144
L. Bittencourt, R. Sakellariou, E. Madeira
Grids and clouds are utilized for the execution of applications composed of dependent tasks, usually modeled as workflows. To efficiently run the application, a scheduler must distribute the components of the workflow in the available resources using information about duration of tasks and communication between tasks in the workflow. However, such information may be subject to imprecisions, thus not reflecting what is observed during the execution. In this paper we propose a simple way of representing the costs of the components in a workflow in order to reduce the impact of uncertainties introduced by wrong estimations, and also to ease the application specification for the user. Evaluation shows that the use of relative costs in tasks and dependencies can improve in many cases the resulting schedule when compared to cases where the input data carries an uncertainty of 20% and 50%.
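The abstract does not define the representation precisely; one way to picture relative costs is shown below: each task's compute cost is stated as a multiple of a reference task, a single measured run turns the whole annotation into absolute estimates, and a simple priority rule needs only the relative values. The toy workflow, the numbers and the ranking rule are illustrative assumptions, not the scheduling algorithm evaluated in the paper.

```python
# Relative cost annotation: each task's compute cost as a multiple of a
# reference task, plus relative sizes for data passed between tasks.
# Values and the toy workflow are assumptions, not taken from the paper.
relative_cost = {"A": 1.0, "B": 3.0, "C": 0.5, "D": 2.0}
edges = {("A", "B"): 1.0, ("A", "C"): 0.2, ("B", "D"): 1.5, ("C", "D"): 0.3}

def absolute_estimates(measured_task: str, measured_seconds: float):
    """Scale every relative cost by one observed execution time."""
    unit = measured_seconds / relative_cost[measured_task]
    return {t: r * unit for t, r in relative_cost.items()}

def priority_order():
    """Rank tasks by their own relative cost plus outgoing transfer weight.

    A very simplified stand-in for list-scheduling priorities: it needs only
    the relative annotations, so wrong absolute estimates cannot distort it.
    """
    out_weight = {t: 0.0 for t in relative_cost}
    for (src, _dst), w in edges.items():
        out_weight[src] += w
    return sorted(relative_cost, key=lambda t: relative_cost[t] + out_weight[t], reverse=True)

if __name__ == "__main__":
    print("priorities:", priority_order())
    print("absolute estimates after measuring A = 120 s:", absolute_estimates("A", 120.0))
```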
Citations: 9
Replication for dependability on virtualized cloud environments
Pub Date : 2012-12-03 DOI: 10.1145/2405136.2405138
Filipe Araújo, R. Barbosa, A. Casimiro
Execution of critical services traditionally requires multiple distinct replicas, supported by independent network and hardware. To operate properly, these services often depend on the correctness of a fraction of the replicas, usually over 2/3 or 1/2. Defying this ideal situation, economic reasons may tempt users to replicate critical services onto a single multi-tenant cloud infrastructure. Since this may expose users to correlated failures, we assess the risks for two kinds of majorities: a conventional one, related to the number of replicas, regardless of the machines where they run; and a second one, related to the physical machines where the replicas run. This latter case may exist in multi-tenant virtualized environments only. We evaluate crash-stop and Byzantine faults that may affect virtual machines or physical machines. Contrary to what one might expect, we conclude that replicas do not need to be evenly distributed over a fixed number of physical machines. On the contrary, we found cases where they should be as unbalanced as possible. We try to systematically identify the best defense for each kind of fault and for each kind of majority to be preserved.
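The abstract states the bounds only in general terms; the sketch below works through the conventional, replica-count majority only: the classical minimum replica counts for crash-stop and Byzantine faults, and a toy check of how many physical-machine failures a given VM placement can absorb before that majority is lost. The placement examples and the greedy worst-case rule are illustrative assumptions, not the paper's analysis of physical-machine majorities.

```python
def min_replicas(f: int, byzantine: bool) -> int:
    """Classical lower bounds: 2f+1 replicas tolerate f crash faults,
    3f+1 tolerate f Byzantine faults."""
    return 3 * f + 1 if byzantine else 2 * f + 1

def tolerated_host_failures(placement: list[int], byzantine: bool) -> int:
    """How many physical machines may fail before the correct majority is lost.

    placement[i] = number of VM replicas hosted on physical machine i.
    A host failure takes down all replicas it hosts; we greedily fail the
    most loaded hosts first (the worst case for the service).
    """
    n = sum(placement)
    needed = (2 * n) // 3 + 1 if byzantine else n // 2 + 1   # correct replicas required
    lost, failures = 0, 0
    for vms in sorted(placement, reverse=True):
        if n - (lost + vms) < needed:
            return failures
        lost += vms
        failures += 1
    return failures

if __name__ == "__main__":
    print("crash, f=1 ->", min_replicas(1, byzantine=False), "replicas")
    print("Byzantine, f=1 ->", min_replicas(1, byzantine=True), "replicas")
    # The same 4 replicas under different placements across physical machines.
    for placement in ([2, 2], [3, 1], [1, 1, 1, 1]):
        print(placement, "hosts survive",
              tolerated_host_failures(placement, byzantine=False), "host failure(s)")
```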
Citations: 4
VMR: volunteer MapReduce over the large scale internet
Pub Date : 2012-12-03 DOI: 10.1145/2405136.2405137
Fernando Costa, L. Veiga, P. Ferreira
Volunteer Computing systems (VC) harness computing resources of machines from around the world to perform distributed independent tasks. Existing infrastructures follow a master/worker model, with a centralized architecture, which limits the scalability of the solution given its dependence on the server. We intend to create a distributed model, in order to improve performance and reduce the burden on the server. In this paper we present VMR, a VC system able to run MapReduce applications on top of volunteer resources, over the large scale Internet. We describe VMR's architecture and evaluate its performance by executing several MapReduce applications on a wide area testbed. Our results show that VMR successfully runs MapReduce tasks over the Internet. When compared to an unmodified VC system, VMR obtains a performance increase of over 60% in application turnaround time, while reducing the bandwidth use by an order of magnitude.
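VMR's actual protocol is not reproduced in the abstract; the sketch below only illustrates the master/worker MapReduce shape that volunteer computing builds on, with volunteers simulated as local functions and unanswered map tasks simply re-issued. The word-count job, the drop probability and the retry rule are assumptions for illustration.

```python
import random
from collections import defaultdict

def map_wordcount(chunk: str):
    """Map task a volunteer would run: emit (word, 1) pairs for its chunk."""
    return [(w.lower(), 1) for w in chunk.split()]

def unreliable_volunteer(task):
    """Simulate a volunteer node: sometimes it silently drops the task."""
    if random.random() < 0.3:
        return None                      # no result ever comes back
    return map_wordcount(task)

def run_job(chunks):
    """Master loop: dispatch map tasks, re-issue unanswered ones, then reduce."""
    random.seed(1)
    pending = list(chunks)
    intermediate = []
    while pending:                       # re-execute until every chunk is mapped
        still_pending = []
        for chunk in pending:
            result = unreliable_volunteer(chunk)
            if result is None:
                still_pending.append(chunk)   # volunteer disappeared; retry later
            else:
                intermediate.extend(result)
        pending = still_pending
    counts = defaultdict(int)            # reduce phase: sum counts per word
    for word, n in intermediate:
        counts[word] += n
    return dict(counts)

if __name__ == "__main__":
    print(run_job(["volunteer computing over the internet",
                   "mapreduce over volunteer resources"]))
```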
Citations: 3
An analytical approach for predicting QoS of web services choreographies
Pub Date : 2012-12-03 DOI: 10.1145/2405136.2405140
A. Goldman, Yanik Ngoko, D. Milojicic
Given a Web Services Composition, we deal with the prediction of the mean service response time that can be expected from a user request that is serviced. This challenge is a key issue in the design of middleware, managing Web Services Composition. We focus on complex services composition that can be described as BPMN choreographies of services. Our main contribution is a mathematical programming based approach for the prediction of the response time of Web Services Compositions. This new approach occurs through the automatic generation of a linear program whose number of variables and constraints is polynomial in the number of elements used to represent the Service Composition. The equations of the linear program are based on well known aggregation rules for service composition and a new modeling that we introduced for handling communication within Web Services.
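The abstract mentions "well known aggregation rules" without stating them; the commonly used ones for mean response time can be sketched directly: sequences add, parallel (AND) blocks take the maximum, and exclusive (XOR) branches take a probability-weighted average. The tree encoding and the toy choreography below are illustrative, not the authors' linear-programming formulation.

```python
def response_time(node) -> float:
    """Estimate the mean response time (ms) of a composition tree.

    node is either ("service", t) or (operator, children ...):
      ("seq",  c1, c2, ...)               -> sum of children
      ("and",  c1, c2, ...)               -> max of children (parallel join)
      ("xor", [(p1, c1), (p2, c2), ...])  -> probability-weighted average
    Using the max of mean values for AND-joins is the usual simplification.
    """
    kind = node[0]
    if kind == "service":
        return node[1]
    if kind == "seq":
        return sum(response_time(c) for c in node[1:])
    if kind == "and":
        return max(response_time(c) for c in node[1:])
    if kind == "xor":
        return sum(p * response_time(c) for p, c in node[1])
    raise ValueError(f"unknown operator {kind!r}")

if __name__ == "__main__":
    # Toy choreography: book, then pay and notify in parallel, then either a
    # fast (80%) or a slow (20%) shipping service.
    flow = ("seq",
            ("service", 120.0),
            ("and", ("service", 200.0), ("service", 80.0)),
            ("xor", [(0.8, ("service", 50.0)), (0.2, ("service", 400.0))]))
    print("expected response time:", response_time(flow), "ms")
```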
Citations: 9
Towards an SPL-based monitoring middleware strategy for cloud computing applications
Pub Date : 2012-12-03 DOI: 10.1145/2405136.2405145
A. Almeida, Everton Cavalcante, T. Batista, Frederico Lopes, Flávia Coimbra Delicato, Paulo F. Pires, Gustavo Alves, N. Cacho
Cloud-based applications are composed of services offered by distinct third-party cloud providers. The selection of the proper cloud services that fit the application needs is based on cloud-related information, i.e. properties of the services such as price, availability, response time, among others. Typically, applications rely on a middleware that abstracts away the burden of direct dealing with underlying mechanisms for service selection and communication with the cloud providers. In this context, in a previous work we already discussed the benefits of using the software product lines (SPL) paradigm for representing alternative cloud services and their properties, which is suitable for the process of choosing the proper services to compose the application. As most cloud-related information are dynamic and may change any time during the application execution, the continuous monitoring of such information is essential to ensure that the deployed application is composed of cloud services that adhere to the application requirements. In this paper we present an SPL-based monitoring middleware strategy to continuously monitoring the dynamic properties of cloud services used by an application.
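As a rough illustration of the continuous monitoring the strategy calls for, the loop below periodically compares the dynamic properties of the currently selected services against the application's requirements and flags when a new service selection (a reconfiguration of the SPL product) would be needed. The property names, thresholds and the hard-coded probe are assumptions, not the middleware's interface.

```python
import time

# Application requirements the chosen cloud services must keep satisfying.
# Names and thresholds are illustrative.
REQUIREMENTS = {"availability_min": 0.995, "response_ms_max": 300.0, "price_max": 0.12}

def probe(service_name: str) -> dict:
    """Stand-in for querying a provider's monitoring API for current values."""
    return {"availability": 0.997, "response_ms": 420.0, "price": 0.10}

def violations(props: dict) -> list[str]:
    """Return the requirements the observed properties currently break."""
    broken = []
    if props["availability"] < REQUIREMENTS["availability_min"]:
        broken.append("availability")
    if props["response_ms"] > REQUIREMENTS["response_ms_max"]:
        broken.append("response time")
    if props["price"] > REQUIREMENTS["price_max"]:
        broken.append("price")
    return broken

def monitor(services, rounds=3, period_s=1.0):
    """Periodically check every selected service; signal when the current
    configuration no longer fits and a new service selection is needed."""
    for _ in range(rounds):
        for svc in services:
            broken = violations(probe(svc))
            if broken:
                print(f"{svc}: violates {broken} -> trigger product reconfiguration")
        time.sleep(period_s)

if __name__ == "__main__":
    monitor(["storage-service", "queue-service"], rounds=1, period_s=0.0)
```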
Citations: 4
Prediction-based auto-scaling of scientific workflows
Pub Date : 2011-12-12 DOI: 10.1145/2089002.2089003
R. Cushing, Spiros Koulouzis, A. Belloum, M. Bubak
In this paper we propose a novel method for auto-scaling data-centric workflow tasks. Scaling is achieved through a prediction mechanism where the input data load on each task within a workflow is used to compute the estimated task execution time. Through load prediction, the framework can take informed decisions on scaling multiple workflow tasks independently to improve overall throughput and reduce workflow bottlenecks. This method was implemented in the WS-VLAM workflow system and with an image analyses workflow we show that this technique achieves faster data processing rates and reduces overall workflow makespan.
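The abstract outlines the mechanism without formulas; one concrete reading is: predict a task's completion time from the data queued at its input and the observed per-instance throughput, then pick the smallest number of task instances whose prediction fits a target window. The throughput figure, target window and instance cap below are illustrative assumptions, not WS-VLAM's actual policy.

```python
from math import ceil

def predicted_runtime(queued_mb: float, throughput_mb_s: float, instances: int) -> float:
    """Estimated time to drain the task's input queue with the current instances."""
    return queued_mb / (throughput_mb_s * instances)

def instances_needed(queued_mb: float, throughput_mb_s: float,
                     target_s: float, max_instances: int = 16) -> int:
    """Smallest instance count whose predicted runtime fits the target window."""
    needed = ceil(queued_mb / (throughput_mb_s * target_s))
    return max(1, min(needed, max_instances))

if __name__ == "__main__":
    # An image-analysis task with 2.4 GB queued, 1.5 MB/s per instance,
    # and a 600 s target window (all assumed numbers).
    queued, rate, target = 2400.0, 1.5, 600.0
    n = instances_needed(queued, rate, target)
    print("scale to", n, "instances;",
          "predicted runtime:", round(predicted_runtime(queued, rate, n), 1), "s")
```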
Citations: 18