{"title":"超立方体的内在并行多尺度算法","authors":"P. Frederickson, O. McBryan","doi":"10.1145/63047.63131","DOIUrl":null,"url":null,"abstract":"Most algorithms implemented on parallel computers have been optimal serial algorithms, slightly modified or parallelized. An exciting possibility is the search for intrinsically parallel algorithms. These are algorithms which do not have a sensible serial equivalent — any serial equivalent is so inefficient as to be of little use.\nWe describe a multiscale algorithm for the solution of PDE systems that is designed specifically for massively parallel supercomputers. Unlike conventional multigrid algorithms, the new algorithm utilizes the same number of processors at all times. Convergence rates are much faster than for standard multigrid methods — the solution error decreases by up to three digits per iteration. The basic idea is to solve many coarse scale problems simultaneously, combining the results in an optimal way to provide an improved fine scale solution.\nOn massively parallel machines the improved convergence rate is attained at no extra computational cost since processors that would otherwise be sitting idle are utilized to provide the better convergence. Furthermore the algorithm is ideally suited to SIMD computers as well as MIMD computers. On serial machines the algorithm is much slower than standard multigrid because of the extra time spent on multiple coarse scales, though in certain cases the improved convergence rate may justify this — primarily in cases where other methods do not converge. The algorithm provides an extremely fast solution of various standard elliptic equations on machines such as the 65,536 processor Connection Machine, and uses only &Ogr; (log(N)) parallel machine instructions to solve such equations. The discovery of this algorithm was motivated entirely by new hardware. It was a surprise to the authors to find that developments in computer architecture might lead to new mathematics. Undoubtedly further intrinsically parallel algorithms await discovery.","PeriodicalId":299435,"journal":{"name":"Conference on Hypercube Concurrent Computers and Applications","volume":"196 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1989-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":"{\"title\":\"Intrinsically parallel multiscale algorithms for hypercubes\",\"authors\":\"P. Frederickson, O. McBryan\",\"doi\":\"10.1145/63047.63131\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Most algorithms implemented on parallel computers have been optimal serial algorithms, slightly modified or parallelized. An exciting possibility is the search for intrinsically parallel algorithms. These are algorithms which do not have a sensible serial equivalent — any serial equivalent is so inefficient as to be of little use.\\nWe describe a multiscale algorithm for the solution of PDE systems that is designed specifically for massively parallel supercomputers. Unlike conventional multigrid algorithms, the new algorithm utilizes the same number of processors at all times. Convergence rates are much faster than for standard multigrid methods — the solution error decreases by up to three digits per iteration. 
The basic idea is to solve many coarse scale problems simultaneously, combining the results in an optimal way to provide an improved fine scale solution.\\nOn massively parallel machines the improved convergence rate is attained at no extra computational cost since processors that would otherwise be sitting idle are utilized to provide the better convergence. Furthermore the algorithm is ideally suited to SIMD computers as well as MIMD computers. On serial machines the algorithm is much slower than standard multigrid because of the extra time spent on multiple coarse scales, though in certain cases the improved convergence rate may justify this — primarily in cases where other methods do not converge. The algorithm provides an extremely fast solution of various standard elliptic equations on machines such as the 65,536 processor Connection Machine, and uses only &Ogr; (log(N)) parallel machine instructions to solve such equations. The discovery of this algorithm was motivated entirely by new hardware. It was a surprise to the authors to find that developments in computer architecture might lead to new mathematics. Undoubtedly further intrinsically parallel algorithms await discovery.\",\"PeriodicalId\":299435,\"journal\":{\"name\":\"Conference on Hypercube Concurrent Computers and Applications\",\"volume\":\"196 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"1989-01-03\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"5\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Conference on Hypercube Concurrent Computers and Applications\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/63047.63131\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Conference on Hypercube Concurrent Computers and Applications","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/63047.63131","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Intrinsically parallel multiscale algorithms for hypercubes
Most algorithms implemented on parallel computers have been optimal serial algorithms, slightly modified or parallelized. An exciting possibility is the search for intrinsically parallel algorithms. These are algorithms which do not have a sensible serial equivalent — any serial equivalent is so inefficient as to be of little use.
We describe a multiscale algorithm for the solution of PDE systems that is designed specifically for massively parallel supercomputers. Unlike conventional multigrid algorithms, the new algorithm utilizes the same number of processors at all times. Convergence rates are much faster than for standard multigrid methods: the solution error decreases by up to three digits per iteration. The basic idea is to solve many coarse-scale problems simultaneously, combining the results in an optimal way to provide an improved fine-scale solution.
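What follows is a minimal 1-D sketch, in Python/NumPy, of that basic idea and not the algorithm of the paper itself: instead of the single coarse grid of standard multigrid, it forms two shifted coarse grids, solves both coarse problems, and averages the interpolated corrections. The model problem, the direct coarse solves, and the equal-weight combination are our illustrative choices, and the convergence factors it prints are not tuned to reproduce the rates quoted above.

import numpy as np

def laplacian_1d(n, h):
    # Standard 3-point finite-difference Laplacian on n interior points,
    # with zero Dirichlet values at both ends of the interval.
    return (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

def coarse_correction(residual, fine_x, coarse_x, h_coarse):
    # Restrict the fine-grid residual to one shifted coarse grid, solve the
    # coarse problem directly (an illustrative stand-in for recursion), and
    # interpolate the coarse error back to the fine grid.
    r_c = np.interp(coarse_x, fine_x, residual)   # restriction by sampling
    e_c = np.linalg.solve(laplacian_1d(len(coarse_x), h_coarse), r_c)
    return np.interp(fine_x,
                     np.concatenate(([0.0], coarse_x, [1.0])),
                     np.concatenate(([0.0], e_c, [0.0])))

n = 63                                  # fine interior points, h = 1/64
h = 1.0 / (n + 1)
fine_x = h * np.arange(1, n + 1)
A = laplacian_1d(n, h)
f = np.sin(np.pi * fine_x)              # smooth right-hand side
u = np.zeros(n)                         # initial guess
D = np.diag(A)                          # Jacobi diagonal

for it in range(3):
    for _ in range(2):                  # two damped-Jacobi smoothing sweeps
        u = u + 0.67 * (f - A @ u) / D
    r = f - A @ u
    # Two shifted coarse grids handled "simultaneously"; on a parallel machine
    # each would occupy processors that standard multigrid leaves idle.
    corr_even = coarse_correction(r, fine_x, fine_x[1::2], 2 * h)
    corr_odd  = coarse_correction(r, fine_x, fine_x[0::2], 2 * h)
    u = u + 0.5 * (corr_even + corr_odd)   # equal weights: an illustrative choice
    print(f"cycle {it + 1}: residual norm {np.linalg.norm(f - A @ u):.2e}")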
On massively parallel machines the improved convergence rate is attained at no extra computational cost, since processors that would otherwise sit idle are used to provide the better convergence. Furthermore, the algorithm is ideally suited to SIMD as well as MIMD computers. On serial machines the algorithm is much slower than standard multigrid because of the extra time spent on the multiple coarse scales, though in certain cases the improved convergence rate may justify this, primarily in cases where other methods do not converge. The algorithm provides an extremely fast solution of various standard elliptic equations on machines such as the 65,536-processor Connection Machine, and uses only O(log(N)) parallel machine instructions to solve such equations. The discovery of this algorithm was motivated entirely by new hardware. It was a surprise to the authors to find that developments in computer architecture might lead to new mathematics. Undoubtedly further intrinsically parallel algorithms await discovery.
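One plausible accounting behind the O(log(N)) figure, under assumptions of ours that the abstract does not spell out (one processor per fine-grid point, and a cycle that visits each of the roughly log_2(N) grid levels a bounded number of times), is

\[
\text{parallel instructions} \;\approx\;
\underbrace{c_{\mathrm{cycles}}}_{\text{bounded: }\sim 3\text{ digits gained per cycle}}
\times \underbrace{\log_2 N}_{\text{levels per cycle}}
\times \underbrace{c_{\mathrm{level}}}_{\text{instructions per level, independent of }N}
\;=\; O(\log N).
\]

The per-level cost can be taken as independent of N because every point update on a level is issued to all processors at once, which is also why the method maps naturally onto SIMD hardware.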