Helmholtz Finite Elements Performance On Mark III and Intel iPSC/860 Hypercubes
J. Parker, T. Cwik, R. Ferraro, P. Liewer, P. Lyster, J. Patterson
The Sixth Distributed Memory Computing Conference, 1991. Proceedings (1991-04-28)
DOI: 10.1109/DMCC.1991.633158
Citations: 2
Abstract
The large distributed memory capacities of hypercube computers are exploited by a finite element application which computes the scattered electromagnetic field from heterogeneous objects whose size is large compared to a wavelength. Such problems scale well with hypercube dimension for large objects: by using the Recursive Inertial Partitioning algorithm and an iterative solver, the work done by each processor is nearly equal and communication overhead for the system set-up and solution is low. The application has been integrated into a user-friendly environment on a graphics workstation in a local area network that includes hypercube host machines. Users need never know their solutions are obtained via a parallel computer. Scaling is shown by computing solutions for a series of models which double the number of variables for each increment of hypercube dimension. Timings are compared for the JPL/Caltech Mark IIIfp Hypercube and the Intel iPSC/860 hypercube. Acceptable solution quality is obtained for object domains of hundreds of square wavelengths and the resulting sparse matrix systems of order 100,000 complex unknowns.
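The abstract credits the near-equal per-processor workload to Recursive Inertial Partitioning. The paper itself does not give code, but the general technique can be sketched: recursively bisect the mesh nodes at the median of their projection onto the principal axis of inertia, so each half carries roughly equal work. The following is a minimal illustrative sketch of that idea (the function name and use of NumPy are assumptions, not the authors' implementation, which targeted the Mark IIIfp and iPSC/860):

```python
import numpy as np

def inertial_bisect(points, indices, depth):
    """Recursively bisect a point set along its principal inertial axis.

    Returns a list of index arrays, one per partition (2**depth parts),
    each holding a nearly equal share of the points.
    """
    if depth == 0:
        return [indices]
    pts = points[indices]
    centered = pts - pts.mean(axis=0)   # shift to the centroid
    # The principal axis of inertia is the eigenvector of the
    # covariance matrix with the largest eigenvalue.
    cov = centered.T @ centered
    _, eigvecs = np.linalg.eigh(cov)
    axis = eigvecs[:, -1]
    proj = centered @ axis
    median = np.median(proj)            # median split balances the halves
    left = indices[proj <= median]
    right = indices[proj > median]
    return (inertial_bisect(points, left, depth - 1)
            + inertial_bisect(points, right, depth - 1))

# Example: split 1000 random 2-D mesh nodes into 2**3 = 8 parts,
# mirroring an assignment of subdomains to a dimension-3 hypercube.
rng = np.random.default_rng(0)
pts = rng.random((1000, 2))
parts = inertial_bisect(pts, np.arange(1000), depth=3)
sizes = [len(p) for p in parts]
```

Because the cut is at the median, successive bisections keep the partition sizes within one node of each other, which is the load-balance property the abstract relies on; the recursion depth maps naturally onto the hypercube dimension.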