
2010 10th IEEE/ACM International Conference on Cluster, Cloud and Grid Computing: Latest Publications

mPlogP: A Parallel Computation Model for Heterogeneous Multi-core Computer
Liang Li, Xingjun Zhang, Jinghua Feng, Xiaoshe Dong
Due to the heterogeneity and multigrain parallelism of heterogeneous multi-core computers, communication and memory access show hierarchical characteristics ignored by other models. In this paper, a new model named mPlogP is presented on the basis of the PlogP model; it abstracts communication and memory access by taking these new characteristics of the heterogeneous multi-core computer into account. It uses memory access to model the behavior of computation, estimates the execution time of every part of an application, and guides the optimization of parallel programs. Finally, experiments validate that the proposed model can precisely evaluate the execution of parallel applications on a heterogeneous multi-core computer.
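The abstract does not spell out the model's parameters, but PlogP-family models estimate point-to-point time from per-message overheads, latency, and a size-dependent gap, and mPlogP adds hierarchical memory access. A toy estimator in that spirit (the additive form and parameter names are assumptions for illustration, not the paper's exact definitions):

```python
def transfer_time(msg_bytes, L, g, o_send, o_recv):
    """LogP/PlogP-style point-to-point estimate: sender overhead +
    network latency + size-dependent gap + receiver overhead.
    (Illustrative additive form; parameter semantics assumed.)"""
    return o_send(msg_bytes) + L + g(msg_bytes) + o_recv(msg_bytes)

def memory_time(msg_bytes, levels):
    """Hierarchical memory-access estimate: sum the cost of moving the
    data through each level of the memory hierarchy, where each level
    is given as a (latency, bytes_per_second) pair."""
    return sum(latency + msg_bytes / bandwidth for latency, bandwidth in levels)
```

Summing such per-level terms is one simple way to capture the hierarchical communication and memory behaviour the abstract describes.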
Citations: 9
Planning Large Data Transfers in Institutional Grids
Fatiha Bouabache, T. Hérault, Sylvain Peyronnet, F. Cappello
In grid computing, many scientific and engineering applications require access to large amounts of distributed data. The size and number of these data collections have been growing rapidly in recent years, and data transmission costs account for a significant part of the overall execution time. When communication streams flow concurrently over shared links, transport-control protocols have trouble allocating fair bandwidth to all the streams, and the network becomes sub-optimally used. One way to deal with this situation is to schedule the communications in a way that induces optimal use of the network. We focus on the case of large data transfers that can be completely described at initialization time. In this case, a plan of data migration can be computed at initialization time and then executed. However, this computation phase must take little time compared to the actual execution of the plan. We propose a best-effort solution that computes an approximate communication plan based on uniform random sampling of possible schedules. We show the effectiveness of this approach both theoretically and by simulations.
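The sampling idea can be sketched as follows: draw schedules uniformly at random, score each with a cost estimator, and keep the best. This is a minimal stand-in (function names and the makespan evaluator are illustrative, not the paper's algorithm):

```python
import random

def plan_transfers(transfers, evaluate_makespan, num_samples=1000, seed=0):
    """Best-effort planning: uniformly sample random orderings of the
    transfers and keep the schedule with the smallest estimated makespan."""
    rng = random.Random(seed)
    best_plan, best_cost = None, float("inf")
    for _ in range(num_samples):
        candidate = transfers[:]
        rng.shuffle(candidate)          # one uniformly sampled schedule
        cost = evaluate_makespan(candidate)
        if cost < best_cost:
            best_plan, best_cost = candidate, cost
    return best_plan, best_cost
```

Because sampling cost grows with `num_samples` rather than with the factorial schedule space, the planning phase stays cheap relative to executing the plan, which is the constraint the abstract emphasises.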
Citations: 5
Dynamic TTL-Based Search in Unstructured Peer-to-Peer Networks
Imen Filali, F. Huet
Resource discovery is a challenging issue in unstructured peer-to-peer networks. Blind search approaches, including flooding and random walks, are the two typical algorithms used in such systems. Blind flooding is not scalable because of its high communication cost. On the other hand, the performance of random-walk approaches largely depends on the random choice of walks. Some informed mechanisms use additional information, usually obtained from previous queries, for routing. Such approaches can reduce the traffic overhead, but they limit the query coverage. Furthermore, they usually rely on complex protocols to maintain information at each peer. In this paper, we propose two schemes that can be used to improve search performance in unstructured peer-to-peer networks. The first is a simple caching mechanism based on resource descriptions: peers that offer resources send periodic advertisement messages, which are stored in a cache and used for routing requests. The second is a dynamic Time-To-Live (TTL) which enables messages to break their horizon. Instead of decreasing the query TTL by 1 at each hop, it is decreased by a value v such as 0
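The fractional-TTL idea can be sketched in a few lines. The abstract is truncated before it specifies how v is chosen, so a constant per-hop cost is assumed here purely for illustration:

```python
import math

def next_ttl(ttl, v):
    """Dynamic decrement: charge a fractional per-hop cost v (0 < v <= 1)
    instead of the classic fixed cost of 1."""
    if not 0 < v <= 1:
        raise ValueError("per-hop cost v must satisfy 0 < v <= 1")
    return ttl - v

def horizon(initial_ttl, v):
    """Number of hops a query survives under a constant per-hop cost v;
    any v < 1 lets the query travel beyond the classic TTL horizon."""
    return math.floor(initial_ttl / v)
```

With an initial TTL of 3, a classic decrement of 1 yields a 3-hop horizon, while a per-hop cost of 0.5 doubles the reach to 6 hops.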
Citations: 25
Energy Efficient Allocation of Virtual Machines in Cloud Data Centers
A. Beloglazov, R. Buyya
Rapid growth of the demand for computational power has led to the creation of large-scale data centers. They consume enormous amounts of electrical power, resulting in high operational costs and carbon dioxide emissions. Moreover, modern Cloud computing environments have to provide a high Quality of Service (QoS) for their customers, making it necessary to deal with the power-performance trade-off. We propose an efficient resource management policy for virtualized Cloud data centers. The objective is to continuously consolidate VMs leveraging live migration and to switch off idle nodes to minimize power consumption, while providing the required Quality of Service. We present evaluation results showing that dynamic reallocation of VMs brings substantial energy savings, thus justifying further development of the proposed policy.
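A consolidation pass of this kind can be illustrated with a greedy sketch: drain under-utilised hosts by migrating their VMs onto the most loaded hosts that still have capacity, then power the emptied hosts down. The threshold, data layout, and placement rule below are illustrative assumptions, not the paper's exact policy:

```python
def utilization(host):
    """Fraction of a host's capacity consumed by its VMs."""
    return sum(host["vms"]) / host["capacity"]

def consolidate(hosts, threshold=0.3):
    """Greedy sketch: migrate all VMs off hosts whose utilization is below
    `threshold` onto the most utilized hosts that still fit them, and
    return the hosts left empty (candidates for switching off)."""
    donors = [h for h in hosts if h["vms"] and utilization(h) < threshold]
    for donor in donors:
        for vm in sorted(donor["vms"], reverse=True):   # largest VMs first
            targets = [h for h in hosts
                       if h is not donor and sum(h["vms"]) + vm <= h["capacity"]]
            if not targets:
                continue                                 # no room: VM stays put
            target = max(targets, key=utilization)       # pack tightest host
            donor["vms"].remove(vm)
            target["vms"].append(vm)
    return [h for h in hosts if not h["vms"]]
```

Packing onto the most utilised feasible host concentrates load on few machines, which is what lets idle nodes be switched off to save energy.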
Citations: 492
Towards Energy Aware Scheduling for Precedence Constrained Parallel Tasks in a Cluster with DVFS
Lizhe Wang, G. Laszewski, Jai Dayal, Fugang Wang
Reducing the energy consumption of high-end computing can bring various benefits, such as reduced operating costs, increased system reliability, and environmental friendliness. This paper aims to develop scheduling heuristics and to present application experience for reducing the power consumption of parallel tasks in a cluster with the Dynamic Voltage Frequency Scaling (DVFS) technique. Formal models are presented for precedence-constrained parallel tasks, DVFS-enabled clusters, and energy consumption. The paper studies the slack time of non-critical jobs, extending their execution time and reducing energy consumption without increasing the overall execution time of the task graph. Additionally, a Green Service Level Agreement is considered. By increasing task execution time within an affordable limit, the paper develops scheduling heuristics to reduce the energy consumption of a task's execution and discusses the relationship between energy consumption and task execution time. Models and scheduling heuristics are examined with a simulation study. Test results justify the design and implementation of the proposed energy-aware scheduling heuristics.
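The core slack-reclamation step can be sketched as follows: stretch a non-critical task into its slack by picking the lowest frequency that still meets the extended deadline. The linear time/frequency scaling and cubic power model are common textbook approximations assumed here, not the paper's exact formal model:

```python
def scale_for_slack(base_time, slack, freqs, f_max=1.0):
    """Return the lowest available frequency at which a non-critical task
    still finishes within its slack-extended deadline (execution time is
    assumed inversely proportional to frequency)."""
    deadline = base_time + slack
    feasible = [f for f in freqs if base_time * f_max / f <= deadline]
    return min(feasible) if feasible else f_max

def dynamic_energy(base_time, f, f_max=1.0):
    """Energy under the common CMOS approximation P ~ f^3: running longer
    at a lower frequency still wins because power drops cubically."""
    runtime = base_time * f_max / f
    return (f ** 3) * runtime
```

For a 10-second task with 10 seconds of slack and levels {0.4, 0.5, 0.8, 1.0}, the heuristic settles on 0.5: the task takes twice as long but consumes a fraction of the energy, without delaying the critical path.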
Citations: 238
Mobility Support Through Caching in Content-Based Publish/Subscribe Networks
Vasilis Sourlas, G. Paschos, P. Flegkas, L. Tassiulas
In a publish/subscribe (pub/sub) network, message delivery is guaranteed for all connected subscribers at publish time. However, in a dynamic mobile scenario where users join and leave the network, it is important that content published while they are disconnected is still delivered when they reconnect from a different point. In this paper, we enhance the caching mechanisms in pub/sub networks to enable client mobility. We build our mobility support with minor changes to the caching scheme while preserving the main principles of loosely coupled and asynchronous communication of the pub/sub model. We also present a new proactive mechanism to reduce the overhead of duplicate responses. The proposed scheme is evaluated via simulations and testbed measurements.
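The caching idea behind the mobility support can be sketched as a broker-side store keyed by publication time, so that a reconnecting subscriber can request everything it missed. The class and method names below are illustrative, not the paper's API:

```python
class BrokerCache:
    """Sketch of a broker-side publication cache for mobility support:
    publications are stored with a timestamp so a subscriber that
    reconnects at a different broker can be replayed the messages
    published while it was disconnected."""
    def __init__(self):
        self._store = []                       # (timestamp, topic, message)

    def cache(self, ts, topic, message):
        self._store.append((ts, topic, message))

    def replay(self, topic, disconnected_at):
        """Messages on `topic` published after the client dropped off."""
        return [m for ts, t, m in self._store
                if t == topic and ts > disconnected_at]
```

Because the subscriber only pulls from the cache on reconnection, publishers and subscribers remain decoupled in time, preserving the asynchronous pub/sub semantics.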
Citations: 38
An MPI-Stream Hybrid Programming Model for Computational Clusters
E. Mancini, Gregory Marsh, D. Panda
The MPI programming model hides network type and topology from developers, but also allows them to seamlessly distribute a computational job across multiple cores in both an intra- and inter-node fashion. This provides high locality performance when the cores are either on the same node or on nodes closely connected by the same network type. The streaming model splits a computational job into a linear chain of decoupled units. This decoupling allows the placement of job units on optimal nodes according to network topology. Furthermore, the links between these units can use varying protocols when the application is distributed across a heterogeneous network. In this paper we study how to integrate the MPI and stream programming models in order to exploit network locality and topology. We present a hybrid MPI-Stream framework that aims to take advantage of each model's strengths. We test our framework with a financial application that simulates an electronic market for a single financial instrument. A stream of buy and sell orders is fed into a price matching engine, which creates a stream of order confirmations, trade confirmations, and quotes based on its attempts to match buyers with sellers. Our results show that the hybrid MPI-Stream framework can deliver a 32% performance improvement at certain order transmission rates.
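The "linear chain of decoupled units" can be illustrated with generator composition: each stage maps an input stream to an output stream, a single-process stand-in for units the hybrid framework would place on separate nodes. The market stages below are toy assumptions inspired by the paper's example, not its implementation:

```python
def pipeline(stages, items):
    """Compose a linear chain of decoupled stream stages: each stage is a
    function mapping an input iterator to an output iterator."""
    it = iter(items)
    for stage in stages:
        it = stage(it)
    return it

# Toy stand-ins for the market example: parse raw orders, then match them.
def parse(orders):
    for side, qty in orders:
        yield {"side": side, "qty": qty}

def match(orders):
    buys, sells = 0, 0
    for o in orders:
        if o["side"] == "buy":
            buys += o["qty"]
        else:
            sells += o["qty"]
        yield ("trade", min(buys, sells))   # quantity matched so far
```

Because each stage only sees an iterator, the chain's links could in principle be swapped for sockets or MPI messages without changing stage logic, which is the decoupling the streaming model relies on.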
Citations: 12
Bandwidth Allocation for Iterative Data-Dependent E-science Applications
Eun-Sung Jung, S. Ranka, S. Sahni
We develop a novel framework for supporting e-Science applications that require streaming of information between sites. Using a Synchronous Dataflow (SDF) model, our framework incorporates the communication times inherent in large scale distributed applications, and can be used to formulate the bandwidth allocation problem with throughput constraints as a multi-commodity linear programming problem. Our algorithms determine how much bandwidth is allocated to each edge while satisfying temporal constraints on collaborative tasks. Simulation results show that the bandwidth allocation by the formulated linear programming outperforms the bandwidth allocation by simple heuristics.
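The constraint structure of such a multi-commodity formulation can be made concrete with a small feasibility check: every edge's capacity must cover the summed rates of the streams routed through it, and every stream must meet its throughput floor. The data layout is an illustrative assumption, not the paper's formulation:

```python
def check_allocation(streams, capacity):
    """Check the two constraint families of a multi-commodity
    bandwidth-allocation LP: per-edge capacity and per-stream
    throughput floors."""
    load = {}
    for s in streams:
        for edge in s["path"]:
            load[edge] = load.get(edge, 0.0) + s["rate"]
    edges_ok = all(load[e] <= capacity[e] + 1e-9 for e in load)
    floors_ok = all(s["rate"] >= s["min_rate"] - 1e-9 for s in streams)
    return edges_ok and floors_ok
```

An LP solver would search the rates satisfying these constraints for the ones that maximise the objective; the checker above only verifies that a candidate allocation is feasible.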
Citations: 2
Expanding the Cloud: A Component-Based Architecture to Application Deployment on the Internet
M. Wallis, F. Henskens, M. Hannaford
Cloud Computing allows us to abstract distributed, elastic IT resources behind an interface that promotes scalability and dynamic resource allocation. The boundary of this cloud sits outside the application and the hardware that hosts it. To the end user, a web application deployed on a cloud is presented no differently from a web application deployed on a stand-alone web server. This model works well for web applications but fails to cater for distributed applications containing components that execute both locally for the user and remotely using non-local resources. This research proposes extending the concept of the cloud to encompass not only server-farm resources but all resources accessible by the user. This brings the resources of the home PC and personal mobile devices into the cloud and promotes the deployment of highly distributed, component-based applications with fat user interfaces, promoting the use of the Internet itself as a platform. We compare this to the standard Web 2.0 approach and show the benefits that deploying fat-client, component-based systems provides over classic web applications. We also describe the benefits that expanding the cloud provides for component migration and resource utilisation.
Citations: 13
Supporting OFED over Non-InfiniBand SANs
Devesh Sharma
Open Fabrics Enterprise Distribution (OFED) is open-source software committed to providing a common communication stack to all RDMA-capable System Area Networks (SANs). It supports high-performance MPIs and legacy protocols for the HPC domain and the data-centre community. Currently, it supports InfiniBand (IB) and the Internet Wide Area RDMA Protocol (iWARP). This paper presents a technique to support the OFED software stack over a non-IB RDMA-capable SAN. We propose the design of a Virtual Management Port (VMP) to enable the IB subnet management model. Integrating VMP with the IB-Verbs interface driver avoids hardware and OFED modifications and enables the connection manager that is mandatory to run user applications. The performance evaluation shows that VMP is lightweight.
Citations: 2