Architectures with parallel I/O subsystems
IEEE Parallel & Distributed Technology: Systems & Applications
DOI: 10.1109/m-pdt.1995.414860
Abstract
Here are some examples, in approximate chronological order, of massively parallel machines that include a parallel I/O subsystem:

- Intel iPSC hypercubes: Each hypercube node has an extra link that allows an I/O processor to hook onto it. Thus, the number of I/O processors can grow to the number of hypercube nodes. In the latest version (the iPSC/860), hypercube nodes are based on the i860 microprocessor, whereas I/O processors use an 80386 chip. Each I/O processor has a SCSI bus with one or more disks, and services requests from all hypercube nodes. Requests and data are routed through the node to which the I/O processor connects.

- nCube hypercubes: Like the iPSC, nodes have an extra connection to an I/O processor. Each I/O processor connects directly to up to eight nodes.¹ The processors use a proprietary chip design.

- MasPar: A SIMD machine with up to 16K processors.² A grid and a three-stage router network connect the processors. The router also connects to a special IORAM of up to 1 Gbyte. This allows permutation of the data between the processor array and the IORAM. The IORAM, in turn, connects to multiple disk arrays via an I/O channel. Each disk array is a RAID 3 arrangement with eight data disks and one parity disk.

- Intel Paragon XP/S: A mesh-structured machine that allows different configurations of compute nodes and I/O nodes. Compute nodes are based on the i860 microprocessor. Typically, the I/O nodes are concentrated in one or more rectangular I/O partitions. The Paragon is based on experience with the Touchstone Delta prototype, a 16 x 36 mesh with 513 processing nodes and 42 I/O nodes (32 with disks and 10 with tapes).³

- KSR1: A multiprocessor based on the Allcache memory design, with up to 1,088 custom processors. Each processor can connect to an adapter for external communications. One of the options is the Multiple Channel Disk adapter, which has five SCSI controllers. Each node can have up to 20 disks attached to it, in increments of five. Software configuration allows nodes with I/O devices to be used exclusively for I/O, or also for computation.

- Thinking Machines CM-5: A multicomputer based on a fat-tree network and Sparc nodes with optional vector units. I/O is provided by a scalable disk array, which is implemented as a separate partition of disk-storage nodes.⁴ Each …
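The RAID 3 arrangement used by the MasPar disk arrays keeps one dedicated parity disk whose contents are the bytewise XOR of the eight data disks, so any single failed disk can be rebuilt from the survivors. A minimal sketch of that parity scheme (the helper names are illustrative, not from the article):

```python
from functools import reduce

def raid3_parity(stripes):
    """Dedicated parity stripe: the bytewise XOR of all data stripes."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*stripes))

def reconstruct(stripes, parity, lost_index):
    """Rebuild one lost data stripe by XOR-ing the parity stripe
    with the surviving data stripes."""
    survivors = [s for i, s in enumerate(stripes) if i != lost_index]
    return raid3_parity(survivors + [parity])

# Eight equal-sized data stripes, as in the MasPar disk arrays.
data = [bytes([17 * i % 256] * 4) for i in range(8)]
parity = raid3_parity(data)
assert reconstruct(data, parity, lost_index=3) == data[3]
```

Because XOR is its own inverse, the same parity function both generates the check stripe and recovers a missing one; the cost is a single extra disk per array.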