
Proceedings 10th IEEE International Symposium on High Performance Distributed Computing: Latest Publications

Applying Grid technologies to bioinformatics
Michael Karo, Christopher Dwan, John L. Freeman, E. Retzel, J. Weissman, M. Livny
The science of bioinformatics provides researchers with the tools necessary to unravel the mysteries of life and evolution, discover cures for disease, and control the evolution of living organisms. To assist researchers in managing the growing data processing and management demands associated with bioinformatics, we have created a production system that draws upon Grid based technologies to control several aspects of the process. We briefly discuss system architecture, results, and future directions of the project.
Citations: 22
Active yellow pages: a pipelined resource management architecture for wide-area network computing
D. Royo, L. D. D. Cerio, N. Kapadia, J. Fortes
This paper describes a novel, pipelined resource management architecture for computational grids. The design is based on two key realizations. One is that resource management involves a sequence of tasks that is best handled by a pipeline. As shown in the paper, this approach results in a scalable architecture for decentralized scheduling. The other realization is that static aggregation of resources for improved scheduling is inadequate in wide-area computing environments because the needs of users and jobs change with both location and time. The described architecture addresses this problem by dynamically aggregating resources in a manner that continuously optimizes system response. This is accomplished by way of an active yellow pages directory that allows aggregation constraints to be (re)defined on the fly. An initial prototype of the active yellow pages service has been deployed in the PUNCH network computing environment. Experiences with the production PUNCH system and preliminary results from controlled experiments indicate that the active yellow pages service performs well.
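The pipelined, dynamically re-aggregating design can be pictured with a small sketch. The following is a minimal illustration assuming a toy resource model; the class and function names are hypothetical and are not the PUNCH or active-yellow-pages API. A directory re-groups resources whenever its aggregation constraint is redefined, and a later pipeline stage schedules against the pre-aggregated groups:

```python
from dataclasses import dataclass

@dataclass
class Resource:
    name: str
    arch: str
    load: float  # current load average

class YellowPages:
    """Illustrative 'active' directory: resources are re-aggregated whenever
    the aggregation constraint is (re)defined."""
    def __init__(self, resources):
        self.resources = resources
        self.groups = {}

    def define_constraint(self, key):
        # Re-group all resources under a new, run-time supplied predicate.
        self.groups = {}
        for r in self.resources:
            self.groups.setdefault(key(r), []).append(r)

    def lookup(self, group_key):
        return self.groups.get(group_key, [])

def schedule(job_arch, directory):
    # A later pipeline stage: pick the least loaded resource from the
    # pre-aggregated group matching the job's architecture.
    candidates = directory.lookup(job_arch)
    return min(candidates, key=lambda r: r.load) if candidates else None

yp = YellowPages([Resource("a", "x86", 0.7),
                  Resource("b", "x86", 0.2),
                  Resource("c", "sparc", 0.1)])
yp.define_constraint(lambda r: r.arch)        # group by architecture
print(schedule("x86", yp).name)               # -> b
yp.define_constraint(lambda r: r.load < 0.5)  # constraint redefined on the fly
print([r.name for r in yp.lookup(True)])      # -> ['b', 'c']
```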
Citations: 6
Cooperative caching middleware for cluster-based servers
Francisco Matias Cuenca-Acuna, Thu D. Nguyen
Considers the use of cooperative caching to manage the memories of cluster-based servers. Over the last several years, a number of researchers have proposed content-aware servers that implement locality-conscious request distribution to address this memory management problem. During this development, it has become conventional wisdom that cooperative caching cannot match the performance of these servers. Unfortunately, while content-aware servers provide very high performance, their request distribution algorithms are typically bound to specific applications. The advantage of building distributed servers on top of a block-based cooperative caching layer is the generality of such a layer; it can be used as a building block for diverse services, ranging from file systems to web servers. In this paper, we reexamine the question of whether a server built on top of a generic block-based cooperative caching algorithm can perform competitively with content-aware servers. Specifically, we compare the performance of a cooperative caching-based Web server against L2S, a highly optimized locality- and load-conscious server. Our results show that, by modifying the replacement policy of traditional cooperative caching algorithms, we can achieve much of the performance provided by locality-conscious servers. Our modification increases network communication to reduce disk accesses, a reasonable trade-off considering the current trend of relative performance between LANs and disks.
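The lookup path implied by the abstract can be sketched as follows; the classes are hypothetical and the replacement policy shown is deliberately naive (the paper's modified policy is not reproduced here). A block missing from local memory is fetched from a peer's memory over the network before falling back to disk:

```python
class CooperativeCache:
    """Illustrative block-level cooperative cache for one cluster node:
    a local miss is served from a peer's memory before going to disk."""
    def __init__(self, node_id, capacity):
        self.node_id = node_id
        self.capacity = capacity
        self.peers = []        # other CooperativeCache instances in the cluster
        self.blocks = {}       # block_id -> data held in this node's memory

    def get_local(self, block_id):
        return self.blocks.get(block_id)

    def get(self, block_id, disk):
        data = self.get_local(block_id)           # 1. local memory
        if data is None:
            for peer in self.peers:               # 2. a peer's memory (LAN transfer)
                data = peer.get_local(block_id)
                if data is not None:
                    break
        if data is None:
            data = disk[block_id]                 # 3. disk, the slowest path
        self.put(block_id, data)
        return data

    def put(self, block_id, data):
        if block_id not in self.blocks and len(self.blocks) >= self.capacity:
            # Naive eviction of an arbitrary block; the paper's contribution
            # is a smarter replacement policy, which is not modeled here.
            self.blocks.pop(next(iter(self.blocks)))
        self.blocks[block_id] = data

disk = {"b1": "data1", "b2": "data2"}
n1, n2 = CooperativeCache("n1", 2), CooperativeCache("n2", 2)
n1.peers, n2.peers = [n2], [n1]
n2.get("b1", disk)         # n2 reads b1 from disk
print(n1.get("b1", disk))  # n1 finds b1 in n2's memory, avoiding a disk access
```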
Citations: 36
Interfacing parallel jobs to process managers
B. Toonen, David Ashton, E. Lusk, Ian T Foster, W. Gropp, E. Gabriel, R. Butler, N. Karonis
A variety of projects worldwide are developing what we call "heterogeneous MPI". These MPI implementations are designed to operate on multiple computers, perhaps of different types, ranging in complexity from a set of desktop workstations to several supercomputers connected via a wide area network. These considerations led us to investigate the feasibility of defining a common API that could be used within MPI implementations to access process startup, initialization, monitoring, and control functions provided by an underlying process management system. If various MPI implementations were coded to that API, one could then develop multiple "process management" modules that could be reused within different MPI implementations, thus allowing partitioning of effort between different development groups. In pursuit of this goal, we have designed such an API, which we call BNR. The major goals of the BNR interface are outlined.
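As an illustration only, here is a hypothetical process-management interface in the spirit the abstract describes; the method names are assumptions and are not the actual BNR calls:

```python
from abc import ABC, abstractmethod
import subprocess

class ProcessManager(ABC):
    """Hypothetical process-management interface: an MPI implementation would
    code to this, and different process managers would implement it."""

    @abstractmethod
    def spawn(self, executable, count, args):
        """Start `count` processes and return a job handle."""

    @abstractmethod
    def put_info(self, key, value):
        """Publish startup information (e.g. a listening port) so peer
        processes can find each other during initialization."""

    @abstractmethod
    def get_info(self, key):
        """Retrieve information published by another process."""

    @abstractmethod
    def monitor(self, job):
        """Return the exit status of each process (None while running)."""

    @abstractmethod
    def kill(self, job):
        """Terminate every process in the job."""

class LocalProcessManager(ProcessManager):
    """Toy implementation that starts processes on the local host only."""
    def __init__(self):
        self._kv = {}

    def spawn(self, executable, count, args):
        return [subprocess.Popen([executable, *args]) for _ in range(count)]

    def put_info(self, key, value):
        self._kv[key] = value

    def get_info(self, key):
        return self._kv.get(key)

    def monitor(self, job):
        return [p.poll() for p in job]

    def kill(self, job):
        for p in job:
            p.terminate()

pm = LocalProcessManager()
job = pm.spawn("sleep", 2, ["1"])   # two local processes stand in for an MPI job
print(pm.monitor(job))              # -> [None, None] while still running
pm.kill(job)
```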
Citations: 3
The Logistical Session Layer
D. M. Swany, R. Wolski
The Logistical Session Layer is a system that provides enhanced functionality to distributed programming systems. The term Logistical refers to the fact that we enhance the traditional client-server model to allow for intermediate systems which are neither client nor server. This system generalizes the notion of caches but represents a cleaner architecture in that it explicitly declares itself to be a session layer protocol.
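A minimal sketch of the idea, using toy in-process objects rather than a real protocol stack: the intermediary is neither client nor server, it terminates the incoming hop, stages the data, and forwards it on the next hop:

```python
class Endpoint:
    """A conventional client or server endpoint."""
    def __init__(self, name):
        self.name = name
        self.received = []

    def deliver(self, data):
        self.received.append(data)

class SessionIntermediary:
    """Illustrative store-and-forward node that is neither client nor server:
    it terminates the incoming hop, stages the data, and opens the next hop."""
    def __init__(self, next_hop):
        self.next_hop = next_hop
        self.staged = []

    def deliver(self, data):
        self.staged.append(data)   # data is buffered at the intermediary ...
        self.forward()

    def forward(self):
        while self.staged:
            self.next_hop.deliver(self.staged.pop(0))  # ... then moved onward

server = Endpoint("server")
depot = SessionIntermediary(next_hop=server)
depot.deliver("chunk-1")   # the client talks to the nearest intermediary, not the server
print(server.received)     # -> ['chunk-1']
```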
Citations: 5
Adaptable mirroring in cluster servers
Ada Gavrilovska, K. Schwan, Van Oleson
This paper presents a software architecture for continuously mirroring streaming data received by one node of a cluster-based server to other cluster nodes. The intent is to distribute the load on the server generated by the data's processing and distribution to many clients. This is particularly important when the server not only processes streaming data, but also performs additional processing tasks that heavily depend on current application state. One such task is the preparation of suitable initialization state for thin clients, so that such clients can understand future data events being streamed to them. In particular, when large numbers of thin clients must be initialized at the same time, initialization must be performed without jeopardizing the quality of service offered to regular clients continuing to receive data streams. The mirroring framework presented and evaluated has several novel aspects. First, by performing mirroring at the middleware level, application semantics may be used to reduce mirroring traffic, including filtering events based on their content, by coalescing certain events, or by simply varying mirroring rates according to current application needs concerning the consistencies of mirrored vs. original data. Second, we present an adaptive algorithm that varies mirror consistency and thereby, mirroring overheads in response to changes in clients' request behavior. Third, our framework not only mirrors events, but it can also mirror the new states computed from incoming events, thus enabling dynamic tradeoffs in the communication vs. computation loads imposed on the server node receiving events and on its mirror nodes.
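A minimal sketch of the adaptation idea, assuming a simple key-value event model; the class below is hypothetical and not the authors' middleware. Events are coalesced per key, and the mirroring interval shrinks as client request activity rises, trading mirroring overhead against mirror consistency:

```python
import time

class AdaptiveMirror:
    """Illustrative middleware-level mirroring: events are coalesced per key
    (only the latest value per key is shipped) and the mirroring interval
    shrinks when clients request initialization state more often."""
    def __init__(self, mirror_nodes, base_interval=1.0):
        self.mirror_nodes = mirror_nodes    # callables standing in for mirror nodes
        self.base_interval = base_interval
        self.interval = base_interval
        self.pending = {}                   # key -> latest event (coalesced)
        self.last_flush = time.monotonic()

    def on_event(self, key, value):
        self.pending[key] = value           # coalesce: later events overwrite earlier ones
        if time.monotonic() - self.last_flush >= self.interval:
            self.flush()

    def on_client_activity(self, requests_per_sec):
        # More client (re)initializations -> keep the mirrors more consistent.
        self.interval = self.base_interval / (1.0 + requests_per_sec)

    def flush(self):
        batch, self.pending = self.pending, {}
        for node in self.mirror_nodes:
            node(batch)
        self.last_flush = time.monotonic()

shipped = []
m = AdaptiveMirror([shipped.append], base_interval=60.0)
m.on_event("state/IBM", 101.5)
m.on_event("state/IBM", 101.7)   # coalesced with the previous event
m.flush()
print(shipped)                   # -> [{'state/IBM': 101.7}]
```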
Citations: 11
A study of deadline scheduling for client-server systems on the Computational Grid
A. Takefusa, S. Matsuoka, H. Casanova, F. Berman
The Computational Grid is a promising platform for the deployment of various high-performance computing applications. A number of projects have addressed the idea of software as a service on the network. These systems usually implement client-server architectures with many servers running on distributed Grid resources and have commonly been referred to as network-enabled servers (NES). An important question is that of scheduling in this multi-client multi-server scenario. Note that in this context most requests are computationally intensive as they are generated by high-performance computing applications. The Bricks simulation framework has been developed and extensively used to evaluate scheduling strategies for NES systems. The authors first present recent developments and extensions to the Bricks simulation models. They discuss a deadline scheduling strategy that is appropriate for the multi-client multi-server case, and augment it with "Load Correction" and "Fallback" mechanisms which could improve the performance of the algorithm. They then give Bricks simulation results. The results show that future NES systems should use deadline scheduling with multiple fallbacks and it is possible to allow users to make a trade-off between failure-rate and cost by adjusting the level of conservatism of deadline scheduling algorithms.
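A minimal sketch of deadline scheduling with fallback, using a toy prediction model rather than the Bricks simulation models; the server fields and cost units are hypothetical:

```python
def predicted_finish(server, job_cost, now):
    # Simple estimate: queued work plus this job's cost, divided by server speed.
    return now + (server["queued_work"] + job_cost) / server["speed"]

def deadline_schedule(servers, job_cost, deadline, now=0.0):
    """Return the servers predicted to meet the deadline, best first; the tail
    of the list serves as fallback targets if an earlier choice fails."""
    feasible = [s for s in servers if predicted_finish(s, job_cost, now) <= deadline]
    return sorted(feasible, key=lambda s: predicted_finish(s, job_cost, now))

def submit_with_fallback(servers, job_cost, deadline, run):
    for s in deadline_schedule(servers, job_cost, deadline):
        try:
            return run(s)          # first successful submission wins
        except RuntimeError:
            continue               # fallback: try the next feasible server
    raise RuntimeError("no server is predicted to meet the deadline")

servers = [{"name": "A", "speed": 10.0, "queued_work": 50.0},
           {"name": "B", "speed": 5.0,  "queued_work": 0.0}]
# A 20-unit job with deadline t=6: A is predicted to finish at 7.0, B at 4.0.
print(deadline_schedule(servers, job_cost=20.0, deadline=6.0)[0]["name"])  # -> B
print(submit_with_fallback(servers, 20.0, 6.0, run=lambda s: s["name"]))   # -> B
```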
Citations: 102
Middleware support for global access to integrated computational collaboratories
Vijay Mann, M. Parashar
The growth of the Internet and the advent of the computational "Grid" have made it possible to develop and deploy advanced computational collaboratories. These systems build on high-end computational resources and communication technologies underlying the Grid, and provide seamless and collaborative access to particular resources, services or applications. Integrating these "focused" collaboratories presents significant challenges. Key among these is the design and development of robust middleware support that addresses scalability, service discovery, security and access control, and interaction and collaboration management for consistent access. The authors first investigate the architecture of such a middleware that enables global (Web-based) access to collaboratories. They then present the design and implementation of a middleware substrate that enables a peer-to-peer integration of and global (collaborative) access to geographically distributed instances of the DISCOVER computational collaboratory for interaction and steering.
Citations: 15
Reducing delay with dynamic selection of compression formats
C. Krintz, B. Calder
Internet computing is facilitated by a remote execution methodology in which programs transfer to a destination for execution. Since the transfer time can substantially degrade the performance of remotely executed (mobile) programs, file compression is used to reduce the amount of data that is transferred. Compression techniques, however, must trade off compression ratio against decompression time: higher-ratio formats are algorithmically more complex, and decompression is performed at run-time in this environment. In this paper, we define the total delay as the time for both the transfer and the decompression of a compressed file. To minimize the total delay, a mobile program should be compressed in the format that minimizes this delay. Since both the transfer time and the decompression time depend upon the current underlying resource performance, the "best" format varies, and no single compression format minimizes the total delay for all resource performance characteristics. We present a system called Dynamic Compression Format Selection (DCFS) for the automatic and dynamic selection of competitive compression formats based on predicted values of future resource performance. Our results show that DCFS reduces the total delay imposed by the compressed transfer of Java archives (.jar files) by 52% on average for the networks, compression techniques and benchmarks studied.
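A minimal worked sketch of the selection rule, with hypothetical sizes, decompression rates and bandwidths rather than the paper's measurements: predict total delay as transfer time plus decompression time for each available format and pick the minimum:

```python
def total_delay(compressed_mb, decomp_mb_per_s, bandwidth_mb_per_s, original_mb):
    transfer = compressed_mb / bandwidth_mb_per_s
    decompress = original_mb / decomp_mb_per_s   # time to reproduce the original data
    return transfer + decompress

def select_format(formats, bandwidth_mb_per_s, original_mb):
    """Pick the compression format with the smallest predicted total delay
    under the current (predicted) network bandwidth."""
    return min(formats,
               key=lambda f: total_delay(f["size_mb"], f["decomp_mb_per_s"],
                                         bandwidth_mb_per_s, original_mb))

# Hypothetical 10 MB archive: the tighter format compresses better but
# decompresses more slowly, so the best choice depends on bandwidth.
formats = [{"name": "none",  "size_mb": 10.0, "decomp_mb_per_s": float("inf")},
           {"name": "fast",  "size_mb": 6.0,  "decomp_mb_per_s": 50.0},
           {"name": "tight", "size_mb": 4.0,  "decomp_mb_per_s": 10.0}]
print(select_format(formats, bandwidth_mb_per_s=0.5, original_mb=10.0)["name"])    # slow link -> tight
print(select_format(formats, bandwidth_mb_per_s=100.0, original_mb=10.0)["name"])  # fast LAN -> none
```

On the slow link the highest-ratio format wins despite its slower decompression; on the fast LAN sending the archive uncompressed is predicted to be cheapest.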
Citations: 17
QoS-aware dependency management for component-based systems
Yi Cui, K. Nahrstedt
Building and dynamically configuring component-based systems is an important topic in distributed systems and ubiquitous computing. However, systematic and automatic configuration management remains a challenging problem for two reasons: (1) QoS-enforced service delivery demands that system performance be maximized by choosing the best configuration, and (2) dynamically varying resource availability in the distributed environment makes it desirable to optimize system resource consumption. We present a graph-based dependency management model to address these problems. Our model integrates the management of inter-component functional dependencies, including consistency checking and automatic system configuration, as well as QoS-aware resource dependency management. Based on the model, we present a pruning-based configuration selection algorithm, which consistently optimizes system resource consumption while preserving the QoS level in a heterogeneous environment. Our initial simulation results demonstrate the soundness of our model and algorithm.
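A minimal sketch of pruning-based selection over a toy component model; all names and numbers are hypothetical, and the brute-force enumeration below stands in for the paper's graph-based algorithm. Configurations violating the QoS requirement or the resource budget are pruned, and the cheapest surviving configuration is chosen:

```python
from itertools import product

# Each service has alternative component implementations; each alternative
# declares its QoS contribution and its resource (CPU) cost.
alternatives = {
    "decoder":   [{"name": "sw_decoder", "qos": 0.7, "cpu": 30},
                  {"name": "hw_decoder", "qos": 0.9, "cpu": 10}],
    "transport": [{"name": "tcp",        "qos": 0.8, "cpu": 5},
                  {"name": "udp_fec",    "qos": 0.9, "cpu": 15}],
}

def select_configuration(alternatives, min_qos, cpu_budget):
    """Prune infeasible configurations, then minimize total resource use."""
    best = None
    for combo in product(*alternatives.values()):
        qos = min(c["qos"] for c in combo)   # end-to-end QoS limited by the weakest component
        cpu = sum(c["cpu"] for c in combo)
        if qos < min_qos or cpu > cpu_budget:
            continue                         # pruned: violates QoS or resource constraint
        if best is None or cpu < best[1]:
            best = (combo, cpu)
    return best

combo, cpu = select_configuration(alternatives, min_qos=0.85, cpu_budget=40)
print([c["name"] for c in combo], cpu)       # -> ['hw_decoder', 'udp_fec'] 25
```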
Citations: 16