
Proceedings. The Sixth IEEE International Symposium on High Performance Distributed Computing (Cat. No.97TB100183): Latest Publications

Forecasting network performance to support dynamic scheduling using the network weather service
R. Wolski
The Network Weather Service is a generalizable and extensible facility designed to provide dynamic resource performance forecasts in metacomputing environments. In this paper, we outline its design and detail the predictive performance of the forecasts it generates. While the forecasting methods are general, we focus on their ability to predict the TCP/IP end-to-end throughput and latency that is attainable by an application using systems located at different sites. Such network forecasts are needed both to support scheduling, and by the metacomputing software infrastructure to develop quality-of-service guarantees.
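A minimal sketch of the forecasting idea described in this abstract, assuming a single exponential-smoothing predictor over periodic throughput measurements; the `ThroughputForecaster` class and its smoothing weight are illustrative choices only, not the NWS's actual predictor suite (the NWS itself maintains several forecasting methods and selects among them based on their recent error).

```python
# Illustrative sketch only: one simple one-step-ahead forecaster for periodic
# end-to-end throughput measurements.
class ThroughputForecaster:
    """Hypothetical exponential-smoothing forecaster for throughput samples (Mb/s)."""

    def __init__(self, alpha=0.3):
        self.alpha = alpha          # weight given to the newest measurement
        self.forecast = None        # current one-step-ahead prediction

    def update(self, measurement):
        """Fold a new measurement into the running forecast and return it."""
        if self.forecast is None:
            self.forecast = measurement
        else:
            self.forecast = self.alpha * measurement + (1 - self.alpha) * self.forecast
        return self.forecast


if __name__ == "__main__":
    history = [8.1, 7.9, 6.5, 7.2, 7.0, 5.8]   # made-up throughput samples in Mb/s
    forecaster = ThroughputForecaster()
    for m in history:
        prediction = forecaster.update(m)
    print(f"forecast for next interval: {prediction:.2f} Mb/s")
```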
DOI: 10.1109/HPDC.1997.626437
Citations: 309
Supporting parallel applications on clusters of workstations: The intelligent network interface approach
Marcel-Catalin Rosu, K. Schwan, R. Fujimoto
This paper presents a novel networking architecture designed for communication-intensive parallel applications running on clusters of workstations (COWs) connected by a high-speed network. This architecture permits: (1) the transfer of selected communication-related functionality from the host machine to the network interface coprocessor and (2) the exposure of this functionality directly to applications as instructions of a Virtual Communication Machine (VCM) implemented by the coprocessor. The user-level code interacts directly with the network coprocessor, as the host kernel only 'connects' the application to the VCM and does not participate in the data transfers. The distinctive feature of our design is its flexibility: the integration of the network with the application can be varied to maximize performance. The resulting communication architecture is characterized by a very low overhead on the host processor, by latency and bandwidth close to the hardware limits, and by an application interface which enables zero-copy messaging and eases the porting of some shared-memory parallel applications to COWs. The architecture admits low-cost implementations based only on off-the-shelf hardware components. Additionally, its current ATM-based implementation can be used to communicate with any ATM-enabled host.
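As a rough illustration of the VCM idea, the sketch below models the coprocessor as a background worker that drains send descriptors posted directly from user-level code, keeping the kernel off the data path. The names (`VirtualCommMachine`, `post_send`) are hypothetical; the real design operates on coprocessor-visible memory and DMA, not Python queues.

```python
import queue
import threading

class VirtualCommMachine:
    """Hypothetical stand-in for a coprocessor-implemented communication machine."""

    def __init__(self):
        self.descriptors = queue.Queue()   # stands in for coprocessor-visible descriptor memory
        worker = threading.Thread(target=self._coprocessor_loop, daemon=True)
        worker.start()

    def post_send(self, dest, buffer):
        """User-level 'instruction': enqueue a send descriptor without a kernel call."""
        self.descriptors.put((dest, buffer))

    def _coprocessor_loop(self):
        while True:
            dest, buffer = self.descriptors.get()
            # A real coprocessor would DMA 'buffer' straight onto the wire (zero copy);
            # here we only report what would be transmitted.
            print(f"coprocessor: sending {len(buffer)} bytes to {dest}")
            self.descriptors.task_done()

vcm = VirtualCommMachine()
vcm.post_send("node-3", b"x" * 4096)
vcm.descriptors.join()        # wait until the simulated coprocessor has drained the queue
```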
DOI: 10.1109/HPDC.1997.622372
Citations: 25
Distributed polyphonic music synthesis
J. Williams, M. Clement
Music synthesis often relies on very computationally intensive algorithms. Various strategies have been used to deal with the complexity, including using simpler, but more limited algorithms, using specialized hardware, and executing them in non-real-time for later playback. Although several implementations using parallel hardware have been done, very little has been done with distributed implementations on clusters of workstations. Distributed music synthesis is typical of distributed multimedia applications which use multiple servers to do computations generating high-bandwidth audio/video data, based on low-bandwidth control information. This work demonstrates distributed music synthesis and describes the effects of using different communication protocols and networks. The implementation is a version of the Csound music synthesis package which has been modified to distribute the synthesis load to multiple servers. The network performance should also be applicable to applications which use a high-bandwidth pipeline of processes, which would be appropriate for audio and video post-processing.
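A toy sketch of the load-distribution strategy described above, assuming voices are partitioned round-robin across synthesis servers and the client mixes the returned sample blocks; the sine-wave `synthesize` stand-in and the block size are illustrative assumptions, not Csound's actual synthesis or networking code.

```python
import math

SAMPLE_RATE = 8000
BLOCK = 64                      # samples rendered per control block

def synthesize(note_hz, n_samples):
    """Stand-in for a server-side synthesis kernel: a plain sine voice."""
    return [math.sin(2 * math.pi * note_hz * i / SAMPLE_RATE) for i in range(n_samples)]

def distribute(notes, n_servers):
    """Round-robin the polyphonic voices over the available servers (low-bandwidth control data)."""
    shares = [[] for _ in range(n_servers)]
    for i, note in enumerate(notes):
        shares[i % n_servers].append(note)
    return shares

def mix(blocks):
    """Sum audio blocks sample by sample (done per server, then again at the client)."""
    return [sum(samples) for samples in zip(*blocks)]

notes = [261.6, 329.6, 392.0, 523.3]            # a four-voice chord
shares = distribute(notes, n_servers=2)
per_server = [mix([synthesize(n, BLOCK) for n in share]) for share in shares]
output = mix(per_server)                        # client-side mixdown of high-bandwidth audio
print(f"{len(output)} mixed samples, first = {output[0]:.3f}")
```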
DOI: 10.1109/HPDC.1997.622359
Citations: 3
Utilizing heterogeneous networks in distributed parallel computing systems
JunSeong Kim, D. Lilja
Heterogeneity is becoming quite common in distributed parallel computing systems, both in processor architectures and in communication networks. Different types of networks have different performance characteristics, while different types of messages may have different communication requirements. In this work, we analyze two techniques for exploiting these heterogeneous characteristics and requirements to reduce the communication overhead of parallel application programs executed on distributed computing systems. The performance based path selection (PBPS) technique selects the best (lowest latency) network among several for each individual message, while the second technique aggregates multiple networks into a single virtual network. We present a general approach for applying and evaluating these techniques to a distributed computing system with multiple interprocessor communication networks. We also generate performance curves for a cluster of IBM workstations interconnected with Ethernet, ATM, and Fibre Channel networks. As we show with several of the NAS benchmarks, these curves can be used to estimate the potential improvement in communication performance that can be obtained with these techniques, given some simple communication characteristics of an application program.
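A minimal sketch of the per-message choice that performance-based path selection (PBPS) makes, assuming a simple latency-plus-transmission-time cost model; the per-network latency and bandwidth figures below are placeholders, not the measured curves from the paper.

```python
NETWORKS = {
    # name: (startup latency in seconds, bandwidth in bytes/second) -- placeholder values
    "ethernet":      (0.0005, 1.25e6),
    "atm":           (0.0009, 1.7e7),
    "fibre_channel": (0.0012, 2.5e7),
}

def predicted_time(msg_bytes, latency, bandwidth):
    """Linear cost model: fixed startup latency plus size-dependent transmission time."""
    return latency + msg_bytes / bandwidth

def select_network(msg_bytes):
    """PBPS-style choice: the network minimizing predicted time for this one message."""
    return min(NETWORKS, key=lambda name: predicted_time(msg_bytes, *NETWORKS[name]))

# Small messages favor the low-latency network; large messages favor high bandwidth.
for size in (128, 64 * 1024, 4 * 1024 * 1024):
    print(f"{size:>8} bytes -> {select_network(size)}")
```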
DOI: 10.1109/HPDC.1997.626440
Citations: 14
Predicting slowdown for networked workstations
S. Figueira, F. Berman
Most applications share the resources of networked workstations with other applications. Since system load can vary dramatically, allocation strategies that assume that resources have a constant availability and/or capability are unlikely to promote performance-efficient allocations in practice. In order to best allocate application tasks to machines, it is critical to provide a realistic model of the effects of contention on application performance. In this paper, we present a model that provides an estimate of the slowdown imposed by competing load on applications targeted to high-performance clusters and networks of workstations. The model provides a basis for predicting realistic communication and computation costs and is shown to achieve good accuracy for a set of scientific benchmarks commonly found in high-performance applications.
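A hedged sketch of the kind of contention model the abstract refers to, assuming a task's runtime scales with the number of competing runnable processes; the 1/(N+1) CPU-sharing rule is an illustrative simplification, not the paper's calibrated slowdown model.

```python
def predicted_slowdown(cpu_load_avg):
    """With roughly N competing runnable processes plus ours, we get about 1/(N+1) of the CPU."""
    return cpu_load_avg + 1.0

def predicted_runtime(dedicated_seconds, cpu_load_avg):
    """Scale the dedicated-machine runtime by the estimated slowdown factor."""
    return dedicated_seconds * predicted_slowdown(cpu_load_avg)

# A task that takes 10 s on an idle machine, under increasing competing load.
for load in (0.0, 0.5, 2.0):
    print(f"load {load}: {predicted_runtime(10.0, load):.1f} s")
```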
DOI: 10.1109/HPDC.1997.622366
Citations: 10
Packing messages as a tool for boosting the performance of total ordering protocols
R. Friedman, R. V. Renesse
This paper compares the throughput and latency of four protocols that provide total ordering. Two of these protocols are measured with and without message packing. We used a technique that buffers application messages for a short period of time before sending them, so more messages are packed together. The main conclusion of this comparison is that message packing influences the performance of total ordering protocols under high load overwhelmingly more than any other optimization that was checked in this paper, both in terms of throughput and latency. This improved performance is attributed to the fact that packing messages reduces the header overhead for messages, the contention on the network, and the load on the receiving CPUs.
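A small sketch of the buffering-and-packing technique, assuming messages are held until either a short delay elapses or a batch-size limit is reached; the delay, batch size, and length-prefixed framing are illustrative choices, and a real implementation would also flush on a timer rather than only when the next message is submitted.

```python
import time

class MessagePacker:
    """Buffer application messages briefly so several are sent as one packed message."""

    def __init__(self, send_fn, max_delay=0.005, max_batch=64):
        self.send_fn = send_fn          # underlying send of one packed network message
        self.max_delay = max_delay      # longest a message may wait to be packed (seconds)
        self.max_batch = max_batch
        self.buffer = []
        self.first_enqueue = None

    def submit(self, payload: bytes):
        if not self.buffer:
            self.first_enqueue = time.monotonic()
        self.buffer.append(payload)
        if (len(self.buffer) >= self.max_batch or
                time.monotonic() - self.first_enqueue >= self.max_delay):
            self.flush()

    def flush(self):
        if self.buffer:
            # Length-prefix each payload so the receiver can split the batch back apart.
            packed = b"".join(len(p).to_bytes(4, "big") + p for p in self.buffer)
            self.send_fn(packed)
            self.buffer = []

packer = MessagePacker(lambda m: print(f"sent packed message of {len(m)} bytes"))
for i in range(100):
    packer.submit(f"update-{i}".encode())
packer.flush()      # flush whatever is still buffered
```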
DOI: 10.1109/HPDC.1997.626423
Citations: 109
Distributed-thread scheduling methods for reducing page-thrashing
Yoshiaki Sudo, Shigeo Suzuki, Shigeki Shibayama
Although distributed threads on distributed shared memory (DSM) provide an easy programming model for distributed computer systems, it is not easy to build a high performance system with them, because a software DSM system is prone to page-thrashing. One way to reduce page-thrashing is to utilize thread migration, which leads to changes in page access patterns on DSM. In this paper, we propose thread scheduling methods based upon page access information and discuss an analytical model for evaluating this information. Then, we describe our implementation of distributed threads, PARSEC (Parallel software environment for workstation cluster). Using user-level threads, PARSEC implements thread migration and thread scheduling based upon the page access information. We also measure the performance of some applications with these thread scheduling methods. These measurements indicate that the thread scheduling methods greatly reduce page-thrashing and improve total system performance.
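A rough sketch of scheduling by page-access affinity, assuming per-thread page-access sets are available (for example, gathered from DSM fault handling) and that threads are greedily co-located on the node whose resident pages they share most; this greedy rule is an illustration only, not PARSEC's scheduling algorithm.

```python
def place_threads(access, n_nodes):
    """Greedily assign each thread to the node with which it shares the most pages."""
    placement = {node: [] for node in range(n_nodes)}
    node_pages = {node: set() for node in range(n_nodes)}
    # Visit the heaviest page users first so they anchor the placement.
    for thread in sorted(access, key=lambda t: -len(access[t])):
        best = max(range(n_nodes),
                   key=lambda n: (len(node_pages[n] & access[thread]),
                                  -len(placement[n])))        # tie-break toward emptier nodes
        placement[best].append(thread)
        node_pages[best] |= access[thread]
    return placement

# Hypothetical per-thread page-access sets.
access = {"t0": {1, 2, 3}, "t1": {2, 3, 4}, "t2": {7, 8}, "t3": {8, 9}}
print(place_threads(access, n_nodes=2))   # threads sharing pages end up on the same node
```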
DOI: 10.1109/HPDC.1997.626444
Citations: 6
A secure communications infrastructure for high-performance distributed computing
Ian T Foster, N. Karonis, C. Kesselman, G. Koenig, S. Tuecke
Applications that use high-speed networks to connect geographically distributed supercomputers, databases, and scientific instruments may operate over open networks and access valuable resources. Hence, they can require mechanisms for ensuring integrity and confidentiality of communications and for authenticating both users and resources. Security solutions developed for traditional client-server applications do not provide direct support for the program structures, programming tools, and performance requirements encountered in these applications. We address these requirements via a security-enhanced version of the Nexus communication library, which we use to provide secure versions of parallel libraries and languages, including the Message Passing Interface. These tools permit a fine degree of control over what, where, and when security mechanisms are applied. In particular, a single application can mix secure and nonsecure communication allowing the programmer to make fine-grained security/performance tradeoffs. We present performance results that quantify the performance of our infrastructure.
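A minimal sketch of per-channel security policy, the fine-grained mixing of secure and nonsecure communication the abstract mentions; the `Channel` class and the HMAC integrity tag are stand-ins for illustration and do not reflect the Nexus library's actual API or mechanisms.

```python
import hmac
import hashlib

class Channel:
    """Hypothetical communication link carrying its own security setting."""

    def __init__(self, name, secure, key=b""):
        self.name = name
        self.secure = secure
        self.key = key

    def frame(self, payload: bytes) -> bytes:
        """Return the bytes that would go on the wire under this channel's policy."""
        if self.secure:
            tag = hmac.new(self.key, payload, hashlib.sha256).digest()
            return tag + payload          # integrity-protected frame (32-byte tag)
        return payload                    # fast path: no security overhead

control = Channel("control", secure=True, key=b"shared-session-key")
bulk = Channel("bulk-data", secure=False)
print(len(control.frame(b"launch task 7")))   # tag plus payload
print(len(bulk.frame(b"x" * 1024)))            # raw payload only
```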
DOI: 10.1109/HPDC.1997.622369
Citations: 48
Adaptive cache invalidation methods in mobile environments
Qinglong Hu, Lee
Caching of frequently accessed data items can reduce the bandwidth requirement in a mobile wireless computing environment. Periodic broadcast of invalidation reports is an efficient cache invalidation strategy. However, this strategy is severely affected by the disconnection and mobility of the clients. In this paper, we present two adaptive cache invalidation report methods, in which the server broadcasts different invalidation reports according to the update and query rates/patterns and client disconnection time, while incurring little uplink cost. Simulation results show that the adaptive invalidation methods are efficient in improving mobile caching and reducing the uplink and downlink costs without degrading the system throughput.
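A toy sketch of client-side handling of a broadcast invalidation report, assuming the report lists items updated within a report window and that a client disconnected longer than the window must drop its whole cache; the window-adaptation rule shown is a hypothetical placeholder, not either of the two methods proposed in the paper.

```python
def choose_window(update_rate, query_rate, base=10.0):
    """Hypothetical adaptation rule: more updates -> shorter window, more queries -> longer."""
    return base * (query_rate + 1) / (update_rate + 1)

def apply_report(cache, report, last_connected, now, window):
    """Invalidate reported items, or purge everything after a disconnection longer than the window."""
    if now - last_connected > window:
        cache.clear()                       # the report cannot cover the whole disconnection gap
        return
    for item_id, update_time in report:
        if update_time > last_connected:
            cache.pop(item_id, None)

cache = {"a": 1, "b": 2, "c": 3}
window = choose_window(update_rate=2.0, query_rate=8.0)
apply_report(cache, report=[("b", 95.0)], last_connected=90.0, now=100.0, window=window)
print(cache)      # 'b' invalidated, 'a' and 'c' still usable
```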
DOI: 10.1109/HPDC.1997.626428
Citations: 38
Distributed service paradigm for remote video retrieval request
Y. Won, J. Srivastava
The per-service cost has been a serious impediment to widespread use of on-line digital continuous media services, especially in the entertainment arena. Although handling continuous media may now be achievable thanks to the technology advances of the past few years, its competitiveness in the market against existing service types such as video rental is still in question. In this paper, we propose a service paradigm for continuous media delivery over a distributed infrastructure, in an effort to reduce the resources required to support a set of service requests. The storage and network resources needed to support a set of requests should be quantified in a uniform metric so that the efficiency of a service schedule can be measured. We develop a cost model that maps a given service schedule to such a quantity; it captures the amortized resource requirement of the schedule and thus measures the schedule's efficiency. We also develop a scheduling algorithm which strategically replicates the requested continuous media files at various intermediate storage sites.
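A rough sketch of the cost-model idea, assuming delivery cost grows with file size and hop count and that holding a replica at an intermediate storage node adds an amortized storage cost; the weights and the greedy replicate-or-not decision are illustrative assumptions, not the paper's model or scheduling algorithm.

```python
NET_COST_PER_GB_HOP = 1.0      # placeholder network cost unit per gigabyte per hop
STORAGE_COST_PER_GB = 0.2      # placeholder amortized storage cost unit per gigabyte held

def delivery_cost(size_gb, hops):
    """Network cost of streaming one copy of the title over the given number of hops."""
    return size_gb * hops * NET_COST_PER_GB_HOP

def schedule_cost(size_gb, requests, hops_from_origin, hops_from_replica, replicate):
    """Uniform metric combining storage and network resources for serving the requests."""
    cost = STORAGE_COST_PER_GB * size_gb if replicate else 0.0
    hops = hops_from_replica if replicate else hops_from_origin
    return cost + requests * delivery_cost(size_gb, hops)

def should_replicate(size_gb, requests, hops_from_origin, hops_from_replica):
    """Greedy decision: replicate at the intermediate storage only if it lowers total cost."""
    return (schedule_cost(size_gb, requests, hops_from_origin, hops_from_replica, True)
            < schedule_cost(size_gb, requests, hops_from_origin, hops_from_replica, False))

# A 2 GB title expected to be requested 5 times: the origin server is 4 hops away,
# the intermediate storage only 1 hop from the clients.
print(should_replicate(2.0, 5, hops_from_origin=4, hops_from_replica=1))   # True
```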
DOI: 10.1109/HPDC.1997.626401
Citations: 4