
International Workshop on Storage Network Architecture and Parallel I/Os: Latest Publications

Scheduling with QoS in parallel I/O systems
Pub Date : 2004-09-30 DOI: 10.1145/1162628.1162629
Ajay Gulati, P. Varman
Parallel I/O architectures are increasingly deployed for high performance computing and in shared data centers. In these environments it is desirable to provide QoS-based allocation of disk bandwidth to different applications sharing the I/O system. In this paper, we introduce a model of disk bandwidth allocation, and provide efficient scheduling algorithms to assign the bandwidth among the concurrent applications.
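The abstract does not reproduce the paper's allocation model or algorithms; as a generic, hedged illustration of QoS-style proportional bandwidth sharing, a minimal virtual-time scheduler could look like the following (the class name, weights, and cost units are invented for the example, not taken from the paper):

```python
import heapq

class ProportionalShareScheduler:
    """Toy proportional-share scheduler: each application has a weight,
    and requests are dispatched in order of virtual finish time, so
    long-run disk bandwidth converges toward the weight ratios.
    This is a generic textbook sketch, not the paper's algorithm."""

    def __init__(self, weights):
        self.weights = weights                  # app -> relative share
        self.vtime = {a: 0.0 for a in weights}  # per-app virtual clock
        self.queue = []                         # (finish_tag, seq, app, cost)
        self.seq = 0

    def submit(self, app, cost):
        # Advance the app's virtual clock by cost/weight; tag the request.
        self.vtime[app] += cost / self.weights[app]
        heapq.heappush(self.queue, (self.vtime[app], self.seq, app, cost))
        self.seq += 1

    def dispatch(self):
        # Serve the pending request with the smallest virtual finish tag.
        if not self.queue:
            return None
        _, _, app, cost = heapq.heappop(self.queue)
        return app, cost

# App A is entitled to twice the bandwidth of app B.
sched = ProportionalShareScheduler({"A": 2, "B": 1})
for _ in range(3):
    sched.submit("A", 1.0)
    sched.submit("B", 1.0)
order = [sched.dispatch()[0] for _ in range(6)]
```

A real disk scheduler must also account for seek costs and per-disk queues; this sketch only captures the fairness bookkeeping.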
Citations: 4
A parallel out-of-core computing system using PVFS for Linux clusters
Pub Date : 2004-09-30 DOI: 10.1145/1162628.1162633
Jianqi Tang, Binxing Fang, Mingzeng Hu, Hongli Zhang
Cluster systems have become a new and popular approach to parallel computing. More and more scientists and engineers use clusters to solve problems with large data sets because of their high processing power, low price, and good scalability. Since traditional out-of-core programs are difficult to write and the virtual memory system does not perform well, we developed a parallel out-of-core computing system using PVFS, named POCCS. POCCS provides a convenient interface for writing out-of-core code and a global view of the out-of-core data. The software architecture, data storage model, and system implementation are described in this paper. The experimental results show that POCCS extends the problem sizes that can be solved, and that its performance is better than that of the virtual memory system when the data set is large.
Citations: 2
A case for virtualized arrays of RAID
Pub Date : 2004-09-30 DOI: 10.1145/1162628.1162630
A. Brinkmann, Kay Salzwedel, Mario Vodisek
Redundant arrays of independent disks, also called RAID arrays, have gained wide popularity in the last twenty years. Most of the disks used in the server market are currently based on RAID technology. The primary reason for introducing RAID technology in 1988 was that large disk systems had become much slower and more expensive than connecting a large number of inexpensive disks and using them as an array. The times seem to repeat themselves: today, large-scale RAID arrays have become incredibly big and expensive, and it seems sensible to replace them with a collection of smaller, inexpensive arrays of JBODs or mid-range RAID arrays. In this paper we show that combining these systems with state-of-the-art virtualization technology can lead to a system that is faster and less expensive than an enterprise storage system, while being as easy to manage and as reliable. We therefore outline the most important features of storage management and compare their realization in enterprise-class storage systems and in current and future virtualization environments.
Citations: 3
A performance-oriented energy efficient file system
Pub Date : 2004-09-30 DOI: 10.1145/1162628.1162636
Dong Li, Jun Wang
Current general-purpose file systems emphasize the consistency of standard file system semantics and performance rather than energy efficiency. In this paper we present a novel energy-efficient file system called EEFS that both reduces energy consumption and improves performance by separately managing small files that exhibit good group access locality. To preserve compatibility, EEFS consists of two working modules: a normal Unix-like File System (UFS) and a group-structured file system (GFS), both transparent to user applications. EEFS contributes a new grouping policy that constructs file groups with group access locality and is used to migrate files between UFS and GFS. Comprehensive trace-driven simulation experiments show that EEFS saves up to 50% of the energy consumed by the general-purpose UNIX file system while simultaneously delivering up to 21% better file I/O performance.
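EEFS's actual grouping policy is not spelled out in the abstract; as a loose sketch of the underlying idea of "group access locality", one could cluster files whose accesses fall close together in time (the time-gap heuristic, trace format, and threshold below are assumptions for illustration, not the paper's policy):

```python
def group_by_locality(trace, gap=1.0):
    """Group files whose accesses occur within `gap` seconds of each other.
    `trace` is a list of (timestamp, filename) pairs sorted by time.
    Files accessed together land in one group, so they can be placed
    together and their storage managed (e.g. spun down) as a unit."""
    groups, current, last_t = [], [], None
    for t, f in trace:
        if last_t is not None and t - last_t > gap:
            groups.append(current)      # time gap ends the current group
            current = []
        if f not in current:
            current.append(f)
        last_t = t
    if current:
        groups.append(current)
    return groups

trace = [(0.0, "a"), (0.2, "b"), (0.3, "a"), (5.0, "c"), (5.1, "d")]
groups = group_by_locality(trace)
```

Files grouped this way could then be migrated into the group-structured store and activated or idled as a unit.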
Citations: 7
Virtualization with prefetching abilities based on iSCSI
Pub Date : 2004-09-30 DOI: 10.1145/1162628.1162634
Peter Bleckmann, Gunnar Schomaker, A. Slowik
The Internet-SCSI protocol [iSCSI] allows a client to interact with a remote SCSI-capable target by means of block-oriented commands encapsulated within TCP/IP packets. iSCSI thereby greatly simplifies storage virtualization, since clients can access storage in a unified manner regardless of whether the I/O path is short or long distance. Intermediate devices located on the path between a client and a target can easily intercept iSCSI sessions and rewrite packets for the sake of load balancing, prefetching, or redundancy, to mention just a few beneficial applications. In this paper we describe the design and implementation of such an iSCSI-capable intermediate device that deploys prefetching strategies in combination with redundant disks to reduce average I/O latency. Depending on its location within the network, this virtualization and prefetching device can hide wide-area access latency and largely reduce network contention targeting remote SCSI devices.
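The intermediate device's actual prefetching strategies are not given in the abstract; a toy sequential read-ahead proxy conveys the general idea of prefetching at an interception point (the dict-backed block store, `depth` parameter, and class name are invented for this sketch):

```python
class PrefetchingProxy:
    """Toy intermediate device: serves block reads from a local cache and,
    on a miss, fetches the requested block plus the next `depth` blocks
    (sequential read-ahead). `backend` is any dict-like block store.
    Illustrative only; not the paper's design."""

    def __init__(self, backend, depth=2):
        self.backend = backend
        self.depth = depth
        self.cache = {}
        self.backend_reads = 0      # counts round-trips to the real target

    def read(self, lba):
        if lba in self.cache:
            return self.cache[lba], "cache-hit"
        # Miss: fetch the block and read ahead sequentially.
        for b in range(lba, lba + 1 + self.depth):
            if b in self.backend and b not in self.cache:
                self.cache[b] = self.backend[b]
                self.backend_reads += 1
        return self.cache[lba], "miss"

store = {i: f"block-{i}" for i in range(8)}
proxy = PrefetchingProxy(store, depth=2)
```

Placed near the client, such a proxy turns two of every three sequential reads into local hits, hiding the wide-area round-trip.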
Citations: 3
Analysis of iSCSI target software
Pub Date : 2004-09-30 DOI: 10.1145/1162628.1162632
Fujita Tomonori, Ogawara Masanori
We analyzed the design and performance of iSCSI storage systems built into general-purpose operating systems. Our experiments revealed that a storage system that uses specialized functions, in conjunction with a modified operating system, outperforms one that uses only the standard functions provided by the operating system. However, our results also show that, for common workloads, careful design enables the latter approach to provide performance comparable to that of the former.
Citations: 13
An overview on MEMS-based storage, its research issues and open problems
Pub Date : 2004-09-30 DOI: 10.1145/1162628.1162635
Yifeng Zhu
A disruptive new storage technology based on Microelectromechanical Systems (MEMS) is emerging as an exciting complement to the memory hierarchy. This study reviews and summarizes current research on integrating this new technology into computer systems at four levels: device, architecture, system, and application. In addition, several potential research issues in MEMS storage are identified, including (1) exploiting idle read/write tips to perform prefetching, (2) reversal access to save seek time, (3) fault-tolerance design inside storage devices, (4) power-consumption modeling, and (5) reevaluation of existing disk-oriented I/O optimization algorithms.
Citations: 2
RAMS: a RDMA-enabled I/O cache architecture for clustered network servers
Pub Date : 2004-09-30 DOI: 10.1145/1162628.1162637
Peng Gu, Jun Wang
Previous studies show that intra-cluster communication easily becomes a major performance bottleneck for a wide range of small write-sharing workloads, especially read-only workloads, in modern clustered network servers. A Remote Direct Memory Access (RDMA) technique has been recommended by many researchers to address the problem, but understanding of how to utilize RDMA well is still in its infancy. This paper proposes a novel solution that boosts intra-cluster communication performance through an RDMA-enabled collaborative I/O cache architecture called RAMS, which aims to smartly cache the most recently used RDMA-based intra-cluster data-transfer processes for future reuse. RAMS makes two major contributions to facilitate RDMA deployment: 1) a novel RDMA-based user-level buffer cache architecture that caches both intra-cluster transferred data and data references; 2) three propagated update protocols that address the RDMA read-failure problem. Comprehensive experimental results show that, compared with a baseline system using Remote Procedure Call (RPC), the three proposed update protocols of RAMS can slash the RDMA read-failure rate by 75% and indirectly boost system throughput by more than 50%.
Citations: 0
Increasing the capacity of RAID5 by online gradual assimilation
Pub Date : 2004-09-30 DOI: 10.1145/1162628.1162631
J. González, Toni Cortes
RAID level 5 (RAID5) disk arrays are very commonly used in many environments. Such arrays offer parallel access, fault tolerance, and little space wasted on redundancy. Nevertheless, this kind of storage architecture has a problem when more disks have to be added to the array: currently, there is no simple, efficient, online mechanism to add any number of new disks (as opposed to replacing them), which is an important drawback for systems that cannot be stopped when storage capacity needs to be increased. We propose an algorithm that adds N disks to an array while it continues running. The proposed algorithm for gradual assimilation of disks has three major advantages: its overhead is easily controlled, it allows the user to benefit from the higher parallelism achieved by the part of the array that has already been converted, and it can be used in 24/7 systems.
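The assimilation algorithm itself is beyond the scope of the abstract, but the invariant any online RAID5 reshaping must preserve for every stripe it touches is the XOR parity, which the standard RAID5 small-write rule maintains (a minimal sketch of that invariant, not the paper's restriping procedure):

```python
from functools import reduce

def parity(blocks):
    """XOR parity across the data blocks of one RAID5 stripe."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

def small_write(old_data, new_data, old_parity):
    """RAID5 small-write rule: new_parity = old_parity ^ old_data ^ new_data,
    so a single-block update needs only two reads and two writes."""
    return bytes(p ^ o ^ n for p, o, n in zip(old_parity, old_data, new_data))

# Three data disks plus parity. Restriping onto newly added disks must
# rebuild this parity for every stripe whose membership changes.
stripe = [bytes([1, 2]), bytes([3, 4]), bytes([5, 6])]
p = parity(stripe)
new_block = bytes([9, 9])
p_after = small_write(stripe[0], new_block, p)
assert p_after == parity([new_block, stripe[1], stripe[2]])
```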
Citations: 53
Demotion-based exclusive caching through demote buffering: design and evaluations over different networks
Pub Date : 2003-09-28 DOI: 10.1145/1162618.1162627
Jiesheng Wu, P. Wyckoff, D. Panda
Multi-level buffer cache architectures have been widely deployed in today's multiple-tier computing environments. However, caches at different levels are inclusive. To make better use of these caches and to achieve performance commensurate with the aggregate cache size, exclusive caching has been proposed. Demotion-based exclusive caching [1] introduces a DEMOTE operation that transfers blocks discarded by an upper-level cache to a lower-level cache. In this paper, we propose a DEMOTE buffering mechanism over storage networks to reduce the visible costs of DEMOTE operations and provide more flexibility for optimizations. We evaluate the performance of DEMOTE buffering using simulations across both synthetic and real-life workloads on three different networks and protocol layers (TCP/IP on Fast Ethernet, IBNice on InfiniBand, and VAPI on InfiniBand). Our results show that DEMOTE buffering can effectively hide demotion costs. A maximum speedup of 1.4x over the original DEMOTE approach is achieved for some workloads, and speedups in the range of 1.08--1.15x are achieved for two real-life workloads. The performance gains result from overlapping demotions with other activities, reduced communication operations, and high utilization of the network bandwidth.
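The DEMOTE operation from the cited work [1] can be illustrated with a toy two-level LRU hierarchy in which upper-level evictions are demoted to the lower level instead of being discarded (a conceptual sketch only; the buffering mechanism this paper layers on top is not modeled):

```python
from collections import OrderedDict

class Level:
    """One cache level with LRU ordering (front of the OrderedDict is LRU)."""

    def __init__(self, size):
        self.size = size
        self.blocks = OrderedDict()        # block -> None, kept in LRU order

    def touch(self, b):
        self.blocks.move_to_end(b)         # mark as most recently used

    def insert(self, b):
        self.blocks[b] = None
        self.blocks.move_to_end(b)
        if len(self.blocks) > self.size:   # evict LRU block, return it
            victim, _ = self.blocks.popitem(last=False)
            return victim
        return None

class ExclusiveCache:
    """Two-level exclusive cache: upper-level evictions are DEMOTEd into
    the lower level rather than dropped, and a lower-level hit promotes
    the block back up (removing it below to keep the levels exclusive)."""

    def __init__(self, upper, lower):
        self.upper, self.lower = Level(upper), Level(lower)

    def read(self, b):
        if b in self.upper.blocks:
            self.upper.touch(b)
            return "upper-hit"
        if b in self.lower.blocks:
            del self.lower.blocks[b]       # keep the levels exclusive
            self._promote(b)
            return "lower-hit"
        self._promote(b)                   # miss: fetch from disk
        return "miss"

    def _promote(self, b):
        victim = self.upper.insert(b)
        if victim is not None:             # DEMOTE instead of discard
            self.lower.insert(victim)
```

Exclusivity is what makes the aggregate capacity count: a block lives in at most one level at a time, so the two caches never duplicate each other's contents.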
Citations: 3