{"title":"A projective geometry architecture for scientific computation","authors":"B. Amrutur, Rajeev Joshi, N. Karmarkar","doi":"10.1109/ASAP.1992.218581","DOIUrl":null,"url":null,"abstract":"A large fraction of scientific and engineering computations involve sparse matrices. While dense matrix computations can be parallelized relatively easily, sparse matrices with arbitrary or irregular structure pose a real challenge to designers of highly parallel machines. A recent paper by N.K. Karmarkar (1991) proposed a new parallel architecture for sparse matrix computations based on finite projective geometries. Mathematical structure of these geometries plays an important role in defining the interconnections between the processors and memories in this architecture, and also aids in efficiently solving several difficult problems (such as load balancing, data-routing, memory-access conflicts, etc.) that are encountered in the design of parallel systems. The authors discuss some of the key issues in the system design of such a machine, and show how exploiting the structure of the geometry results in an efficient hardware implementation of the machine. They also present circuit designs and simulation results for key elements of the system: a 200 MHz pipelined memory; a pipelined multiplier based on an adder unit with a delay of 2 ns; and a 500 Mbit/s CMOS input/output buffer.<<ETX>>","PeriodicalId":265438,"journal":{"name":"[1992] Proceedings of the International Conference on Application Specific Array Processors","volume":"36 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1992-08-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"[1992] Proceedings of the International Conference on Application Specific Array Processors","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ASAP.1992.218581","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2
Abstract
A large fraction of scientific and engineering computations involve sparse matrices. While dense matrix computations can be parallelized relatively easily, sparse matrices with arbitrary or irregular structure pose a real challenge to designers of highly parallel machines. A recent paper by N.K. Karmarkar (1991) proposed a new parallel architecture for sparse matrix computations based on finite projective geometries. The mathematical structure of these geometries plays an important role in defining the interconnections between the processors and memories in this architecture, and also aids in efficiently solving several difficult problems (such as load balancing, data routing, and memory-access conflicts) that are encountered in the design of parallel systems. The authors discuss some of the key issues in the system design of such a machine, and show how exploiting the structure of the geometry results in an efficient hardware implementation of the machine. They also present circuit designs and simulation results for key elements of the system: a 200 MHz pipelined memory; a pipelined multiplier based on an adder unit with a delay of 2 ns; and a 500 Mbit/s CMOS input/output buffer.
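As a rough illustration of the kind of structure the abstract refers to (a sketch, not taken from the paper itself), the snippet below builds the finite projective plane PG(2, p) for a prime p: points and lines are nonzero triples over GF(p) taken up to scalar multiples, and a point lies on a line when their dot product is zero mod p. The mapping assumed here for illustration, associating memories with points and processors with lines so that each processor connects to the p + 1 memories on its line, is only one plausible reading of how such an incidence structure could define the processor-memory interconnect; the paper's actual scheme may differ.

```python
# Minimal sketch of the projective plane PG(2, p), p prime.
# Assumption for illustration only: memories <-> points, processors <-> lines,
# each processor wired to the p + 1 memories incident to its line.

from itertools import product

def projective_points(p):
    """Canonical representatives of the points of PG(2, p)."""
    pts = []
    for v in product(range(p), repeat=3):
        if v == (0, 0, 0):
            continue
        # Normalize so the first nonzero coordinate is 1 (one rep per class).
        first = next(x for x in v if x != 0)
        inv = pow(first, p - 2, p)          # modular inverse, valid for prime p
        rep = tuple((x * inv) % p for x in v)
        if rep not in pts:
            pts.append(rep)
    return pts

def incidence(p):
    """Map each line to the points incident to it (dot product = 0 mod p)."""
    pts = projective_points(p)
    lines = pts                              # PG(2, p) is self-dual
    return {ln: [pt for pt in pts
                 if sum(a * b for a, b in zip(ln, pt)) % p == 0]
            for ln in lines}

if __name__ == "__main__":
    p = 5
    table = incidence(p)
    n = p * p + p + 1
    assert len(table) == n                                 # n points, n lines
    assert all(len(v) == p + 1 for v in table.values())    # p + 1 points per line
    print(f"PG(2,{p}): {n} points/lines, each line meets {p + 1} points")
```

The balanced incidence (every line meets exactly p + 1 points and every point lies on exactly p + 1 lines) is what makes such geometries attractive for the load-balancing and conflict-free memory-access properties mentioned in the abstract.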