
Latest publications from Proceedings Scalable High Performance Computing Conference SHPCC-92.

Applications of FORALL-formed computations in large scale stochastic dynamic programming
Pub Date : 1992-04-26 DOI: 10.1109/SHPCC.1992.232650
F. Hanson, D. Jarvis, H.H. Xu
Data parallel broadcasting methods have been developed by taking advantage of the properties of stochastic, nonlinear, continuous-time dynamical systems. The stochastic components include both Gaussian and Poisson random white noise. An example of a grand challenge level application is the resource management problem. The purpose of this paper is to demonstrate that broadcasting can be performed efficiently if the computational functions are FORALL-formed, i.e. arrays are formed using FORALL-loops. Also, it is predicted that the parallel data vault mass storage method becomes efficient and flexible if the computational functions are FORALL-formed.
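The FORALL-formed style can be pictured outside Fortran as well: the array is written as one whole-array expression in which lower-dimensional operands are broadcast across the missing dimensions, instead of being filled in element by element. The following Python/NumPy fragment is a minimal sketch of that pattern only; the grid sizes and the quadratic cost function are made-up placeholders, not the resource management model of the paper.

    import numpy as np

    # Hypothetical state and control meshes for one dynamic-programming sweep.
    NX, NU = 256, 64
    x = np.linspace(0.0, 1.0, NX)          # state mesh
    u = np.linspace(-1.0, 1.0, NU)         # control mesh
    v_old = np.sin(np.pi * x)              # value function from the previous step (placeholder)

    # Element-by-element formation of the cost array with scalar loops.
    cost_loop = np.empty((NX, NU))
    for i in range(NX):
        for j in range(NU):
            cost_loop[i, j] = x[i] ** 2 + 0.5 * u[j] ** 2 + v_old[i]

    # FORALL-style formation: the whole array is expressed at once and the
    # one-dimensional factors are broadcast across the missing dimension.
    cost_forall = x[:, None] ** 2 + 0.5 * u[None, :] ** 2 + v_old[:, None]

    assert np.allclose(cost_loop, cost_forall)
    # The minimization over controls is likewise a whole-array reduction.
    v_new = cost_forall.min(axis=1)

On a data-parallel machine the second formation maps directly onto broadcasts of x, u and v_old across the processor array, which is the efficiency argument the abstract makes.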
{"title":"Applications of FORALL-formed computations in large scale stochastic dynamic programming","authors":"F. Hanson, D. Jarvis, H.H. Xu","doi":"10.1109/SHPCC.1992.232650","DOIUrl":"https://doi.org/10.1109/SHPCC.1992.232650","url":null,"abstract":"Data parallel broadcasting methods have been developed by taking the advantages of the properties of stochastic, nonlinear, continuous-time dynamical systems. The stochastic components include both Gaussian and Poisson random white noise. An example of a grand challenge level application is the resource management problem. The purpose of this paper is to demonstrate that broadcasting can be efficiently performed, if the computational functions are FORALL-formed, i.e. arrays are formed using FORALL-loops. Also, it is predicted that the parallel data vault mass storage method becomes efficient and flexible if the computational functions are FORALL-formed.<<ETX>>","PeriodicalId":254515,"journal":{"name":"Proceedings Scalable High Performance Computing Conference SHPCC-92.","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-04-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130327470","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Automatic mapping and load balancing of pointer-based dynamic data structures on distributed memory machines
Pub Date : 1992-04-26 DOI: 10.1109/SHPCC.1992.232634
R. P. Weaver, R. Schnabel
Describes an algorithm for automatically mapping and load balancing unstructured, dynamic data structures on distributed memory machines. The algorithm is intended to be embedded in a compiler for a parallel language (DYNO) for programming unstructured numerical computations. The result is that the mapping and load balancing are transparent to the programmer. The algorithm iterates over two basic steps: (1) It identifies groups of nodes ('pieces') that disproportionately contribute to the number of off-processor edges of the data structure and moves them to processors to which they are better connected. (2) It balances the loads by identifying groups of nodes ('flows') that can be moved to adjacent processors without creating new pieces. The initial results are promising, giving good load balancing and a reasonably low number of inter-processor edges.
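A much-simplified, hypothetical sketch of the first step ('pieces') makes the idea concrete: scan for nodes whose edges mostly cross processor boundaries and move each such node to the processor holding most of its neighbours. The adjacency list, the threshold, and the single greedy pass below are illustrative assumptions, not the actual DYNO algorithm.

    from collections import Counter

    # Toy graph and an initial node-to-processor assignment (both made up).
    adjacency = {
        0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
        3: [2, 4, 5], 4: [3, 5], 5: [3, 4],
    }
    owner = {0: 0, 1: 0, 2: 1, 3: 1, 4: 1, 5: 1}

    def off_processor_fraction(node):
        """Fraction of a node's edges whose other endpoint lives on another processor."""
        nbrs = adjacency[node]
        return sum(owner[n] != owner[node] for n in nbrs) / len(nbrs)

    def migrate_badly_placed(threshold=0.5):
        """Greedy single pass: move nodes that contribute disproportionately to
        off-processor edges toward the processor holding most of their neighbours."""
        for node in adjacency:
            if off_processor_fraction(node) > threshold:
                counts = Counter(owner[n] for n in adjacency[node])
                owner[node] = counts.most_common(1)[0][0]

    migrate_badly_placed()
    print(owner)   # node 2 migrates to processor 0, cutting off-processor edges from 2 to 1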
{"title":"Automatic mapping and load balancing of pointer-based dynamic data structures on distributed memory machines","authors":"R. P. Weaver, R. Schnabel","doi":"10.1109/SHPCC.1992.232634","DOIUrl":"https://doi.org/10.1109/SHPCC.1992.232634","url":null,"abstract":"Describes an algorithm for automatically mapping and load balancing unstructured, dynamic data structures on distributed memory machines. The algorithm is intended to be embedded in a compiler for a parallel language (DYNO) for programming unstructured numerical computations. The result is that the mapping and load balancing are transparent to the programmer. The algorithm iterates over two basic steps: (1) It identifies groups of nodes ('pieces') that disproportionately contribute to the number of off-processor edges of the data structure and moves them to processors to which they are better connected. (2) It balances the loads by identifying groups of nodes ('flows') that can moved to adjacent processors without creating new pieces. The initial results are promising, giving good load balancing and a reasonably low number of inter-processor edges.<<ETX>>","PeriodicalId":254515,"journal":{"name":"Proceedings Scalable High Performance Computing Conference SHPCC-92.","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-04-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134228072","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7
Sparse data representation for data-parallel computation
Pub Date : 1992-04-26 DOI: 10.1109/SHPCC.1992.232633
A. L. Cheung, A. Reeves
Performance optimization has been achieved by a transparent parallel sparse data representation in a data-parallel programming environment. In a sparse data representation, only the non-zero data elements of an array are stored and processed. The parallel sparse data representation is designed to efficiently utilize system resources on multicomputer systems for a broad class of problems; the main focus of this work is on the sparse situations that arise in dense data-parallel algorithms rather than the more traditional sparse linear algebra applications. A number of sparse data formats have been considered; one of these formats has been implemented in a high-level data-parallel programming environment called Paragon. Experimental results have been obtained with a distributed-memory multicomputer system.
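The core of any sparse representation is simple to sketch: store only the indices and values of the non-zero elements, and let element-wise data-parallel operations touch just the stored values. The format and helper names below are hypothetical illustrations, not Paragon's actual representation.

    import numpy as np

    def to_sparse(dense, tol=0.0):
        """Keep only the non-zero elements of an array together with their flat indices."""
        indices = np.flatnonzero(np.abs(dense) > tol)
        values = dense.ravel()[indices]
        return dense.shape, indices, values

    def sparse_scale(shape, indices, values, alpha):
        """An element-wise data-parallel operation works on the stored values only."""
        return shape, indices, alpha * values

    def to_dense(shape, indices, values):
        out = np.zeros(np.prod(shape))
        out[indices] = values
        return out.reshape(shape)

    a = np.zeros((4, 4))
    a[0, 1] = 3.0
    a[2, 3] = -1.5
    shape, idx, vals = to_sparse(a)
    assert np.allclose(to_dense(*sparse_scale(shape, idx, vals, 2.0)), 2.0 * a)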
{"title":"Sparse data representation for data-parallel computation","authors":"A. L. Cheung, A. Reeves","doi":"10.1109/SHPCC.1992.232633","DOIUrl":"https://doi.org/10.1109/SHPCC.1992.232633","url":null,"abstract":"Performance optimization has ben achieved by a transparent parallel sparse data representation in a data-parallel programming environment. In a sparse data representation, only the non-zero data elements of an array are stored and processed. The parallel sparse data representation is designed to efficiently utilize system resources on multicomputer systems for a broad class of problems; the main focus of this work is on the sparse situations that arise in dense data-parallel algorithms rather than the more traditional sparse linear algebra applications. A number of sparse data formats have been considered; one of these formats has been implemented in a high-level data-parallel programming environment called Paragon. Experimental results have been obtained with a distributed-memory multicomputer system.<<ETX>>","PeriodicalId":254515,"journal":{"name":"Proceedings Scalable High Performance Computing Conference SHPCC-92.","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-04-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131093135","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
SUPERB support for irregular scientific computations
Pub Date : 1992-04-26 DOI: 10.1109/SHPCC.1992.232626
P. Brezany, M. Gerndt, V. Sipková, H. Zima
Runtime support for parallelization of scientific programs is needed when some information important for decisions in this process cannot be accurately derived at compile time. This paper describes a project which integrates runtime parallelization with the advanced compile-time parallelization techniques of SUPERB. Besides the description of implementation techniques, language constructs are proposed, providing means for the specification of irregular computations. SUPERB is an interactive SIMD/MIMD parallelizing system for the Suprenum, iPSC/860 and Genesis-P machines. The implementation of the runtime parallelization is based on the Parti procedures developed at ICASE NASA.
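The runtime part can be pictured as an inspector/executor scheme: before an irregular loop such as y[i] += x[edge[i]] executes, an inspector scans the indirection array and builds a schedule of the off-processor elements that must be gathered. The block distribution, array sizes, and indirection array below are made up for illustration; this shows the general idea only, not the Parti interface.

    N_GLOBAL = 12
    N_PROCS = 3
    BLOCK = N_GLOBAL // N_PROCS

    def owner(global_index):
        """Block distribution: element g lives on processor g // BLOCK."""
        return global_index // BLOCK

    def inspector(my_rank, edge_slice):
        """Scan the local piece of the indirection array once and record which
        remote elements must be fetched before the executor phase can run locally."""
        fetch = {}                                  # remote processor -> needed indices
        for g in edge_slice:
            p = owner(g)
            if p != my_rank:
                fetch.setdefault(p, set()).add(g)
        return {p: sorted(idx) for p, idx in fetch.items()}

    # Indirection array slice owned by processor 1.
    edge_on_p1 = [0, 5, 11, 6]
    print(inspector(1, edge_on_p1))                 # {0: [0], 2: [11]} -> a gather schedule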
{"title":"SUPERB support for irregular scientific computations","authors":"P. Brezany, M. Gerndt, V. Sipková, H. Zima","doi":"10.1109/SHPCC.1992.232626","DOIUrl":"https://doi.org/10.1109/SHPCC.1992.232626","url":null,"abstract":"Runtime support for parallelization of scientific programs is needed when some information important for decisions in this process cannot be accurately derived at compile time. This paper describes a project which integrates runtime parallelization with the advanced compile-time parallelization techniques of SUPERB. Besides the description of implementation techniques, language constructs are proposed, providing means for the specification of irregular computations. SUPERB is an interactive SIMD/MIMD parallelizing system for the Suprenum, iPSC/860 and Genesis-P machines. The implementation of the runtime parallelization is based on the Parti procedures developed at ICASE NASA.<<ETX>>","PeriodicalId":254515,"journal":{"name":"Proceedings Scalable High Performance Computing Conference SHPCC-92.","volume":"99 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-04-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115542424","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 26
Toward a scalable concurrent architecture for real-time processing of stochastic control and optimization problems
Pub Date : 1992-04-26 DOI: 10.1109/SHPCC.1992.232689
W. Lee
Reports on the development of a scalable multiple-instruction multiple-data (MIMD) concurrent architecture which is intended to serve as an effective alternative for solving stochastic differential and optimization systems. This architecture has in turn motivated the application of group theory and invariance analysis to acquire further insights in understanding the original problem. The speed-up ratios attained by this architecture can realistically justify its potential deployment in certain real-time applications. A case study related to real-time stochastic control and optimization serves to illustrate this possibility.
{"title":"Toward a scalable concurrent architecture for real-time processing of stochastic control and optimization problems","authors":"W. Lee","doi":"10.1109/SHPCC.1992.232689","DOIUrl":"https://doi.org/10.1109/SHPCC.1992.232689","url":null,"abstract":"Reports on the development of a scalable multiple-instruction multiple-data (MIMD) concurrent architecture which is intended to serve as an effective alternative for solving stochastic differential and optimization systems. This architecture has in turn motivated the application of group theory and invariance analysis to acquire further insights in understanding the original problem. The speed-up ratios attained by this architecture can realistically justify its potential deployment in certain real-time applications. A case study related to real-time stochastic control and optimization serve to illustrate this possibility.<<ETX>>","PeriodicalId":254515,"journal":{"name":"Proceedings Scalable High Performance Computing Conference SHPCC-92.","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-04-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125094093","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Massively parallel MIMD solution of the parabolized Navier-Stokes equations
Pub Date : 1992-04-26 DOI: 10.1109/SHPCC.1992.232676
A. Stagg, G. Carey, D. Cline, J. Shadid
Reaching new milestones in science and engineering will require the speed and scalability offered by massively parallel computers. The primary challenge to the users of this technology will be the development of scalable software. All the software's functionality, including the generation of grids, the algorithmic solvers, and the production of output for interpretation and visualization, must scale across multiple processors. As an example of the scalable application concept, the authors have developed a highly parallel, scalable version of a parabolized Navier-Stokes (PNS) code used to simulate steady three-dimensional flow past supersonic and hypersonic flight vehicles. The primary goal of this research has been to develop a fully scalable version of the PNS procedure and to demonstrate that it can achieve high performance on a massively parallel, multiple instruction multiple data (MIMD) computer.
{"title":"Massively parallel MIMD solution of the parabolized Navier-Stokes equations","authors":"A. Stagg, G. Carey, D. Cline, J. Shadid","doi":"10.1109/SHPCC.1992.232676","DOIUrl":"https://doi.org/10.1109/SHPCC.1992.232676","url":null,"abstract":"Reaching new milestones in science and engineering will require the speed and scalability offered by massively parallel computers. The primary challenge to the users of this technology will be the development of scalable software. All the software's functionality, including the generation of grids, the algorithmic solvers, and the production of output for interpretation and visualization, must scale across multiple processors. As an example of the scalable application concept, the authors have developed a highly parallel, scalable version of a parabolized Navier-Stokes (PNS) code used to simulate steady three-dimensional flow past supersonic and hypersonic flight vehicles. The primary goal of this research has been to develop a fully scalable version of the PNS procedure and to demonstrate that it can achieve high performance on a massively parallel, multiple instruction multiple data (MIMD) computer.<<ETX>>","PeriodicalId":254515,"journal":{"name":"Proceedings Scalable High Performance Computing Conference SHPCC-92.","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-04-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121957569","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Parallelization of AMBER molecular dynamics program for the AP1000 highly parallel computer
Pub Date : 1992-04-26 DOI: 10.1109/SHPCC.1992.232680
H. Sato, Y. Tanaka, H. Iwama, S. Kawakika, M. Saito, K. Morikami, T. Yao, S. Tsutsumi
The authors have parallelized the AMBER molecular dynamics program for the AP1000 highly parallel computer. To obtain a high degree of parallelism and an even load balance between processors for model problems of protein and water molecules, protein amino acid residues and water molecules are distributed to processors randomly. The global interprocessor communication required by this data mapping is done efficiently using the AP1000's broadcast network, which broadcasts atom coordinate data for other processors' reference, and its torus network, which carries the point-to-point communication that accumulates forces for atoms assigned to other processors. Experiments showed that a problem with 41095 atoms is processed 226 times faster on a 512-processor AP1000 than on a single processor.
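The decomposition can be sketched serially: every processor holds a full broadcast copy of the coordinates, computes pair interactions only for the atoms it owns, accumulates the Newton's-third-law contributions for atoms owned elsewhere, and a global sum then combines the partial force arrays. The atom count, the per-atom (rather than per-residue) random assignment, and the inverse-power pair force below are placeholders, not AMBER's force field or the AP1000 communication primitives.

    import numpy as np

    rng = np.random.default_rng(0)
    N_ATOMS, N_PROCS = 64, 4                      # illustrative sizes only
    coords = rng.random((N_ATOMS, 3))
    assignment = rng.integers(0, N_PROCS, size=N_ATOMS)   # random scattering for load balance

    def local_forces(my_rank):
        """Forces computed by one processor: pairs (i, j) with i owned locally and j > i.
        Contributions to non-local atoms are accumulated here and reduced globally later."""
        f = np.zeros((N_ATOMS, 3))
        for i in np.flatnonzero(assignment == my_rank):
            for j in range(i + 1, N_ATOMS):
                r = coords[i] - coords[j]
                fij = r / np.dot(r, r) ** 2       # placeholder pair force, not AMBER's
                f[i] += fij
                f[j] -= fij                       # Newton's third law partner, possibly remote
        return f

    # The global reduction that the torus network would perform with point-to-point messages.
    total_forces = sum(local_forces(p) for p in range(N_PROCS))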
{"title":"Parallelization of AMBER molecular dynamics program for the AP1000 highly parallel computer","authors":"H. Sato, Y. Tanaka, H. Iwama, S. Kawakika, M. Saito, K. Morikami, T. Yao, S. Tsutsumi","doi":"10.1109/SHPCC.1992.232680","DOIUrl":"https://doi.org/10.1109/SHPCC.1992.232680","url":null,"abstract":"The authors have parallelized the AMBER molecular dynamics program for the AP1000 highly parallel computer. To obtain a high degree of parallelism and an even load balance between processors for model problems of protein and water molecules, protein amino acid residues and water molecules are distributed to processors randomly. Global interprocessor communication required by this data mapping is efficiently done using the AP1000 broadcast network, to broadcast atom coordinate data for other processors' reference and its torus network; also for point-to-point communication to accumulate forces for atoms assigned to other processors. Experiments showed that a problem with 41095 atoms is processed 226 times faster with a 512 processor AP1000 than by a single processor.<<ETX>>","PeriodicalId":254515,"journal":{"name":"Proceedings Scalable High Performance Computing Conference SHPCC-92.","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-04-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128074128","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 11
A runtime data mapping scheme for irregular problems
Pub Date : 1992-04-26 DOI: 10.1109/SHPCC.1992.232642
R. Ponnusamy, J. Saltz, R. Das
In scalable multiprocessor systems, high performance demands that computational load be balanced evenly among processors and that interprocessor communication be limited as much as possible. In this paper, the authors study the problem of automatically choosing data distributions for irregular problems. Irregular problems are programs where the data access pattern cannot be determined during compilation. The authors describe a method by which data arrays can be automatically mapped at runtime. The mapping is based on the computational patterns in one or more user-specified loops. A distributed memory compiler generates code that, at runtime, generates a distributed data structure to represent the computational pattern of the chosen loop. This computational pattern is used to determine how data arrays are to be partitioned. The compiler generates code to pass the distributed data structure to a partitioner. The work described is being pursued in the context of the CRPC Fortran D project.
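One way to picture the scheme: the generated code records, at runtime, which processor's iterations reference each element of the data array, and the partitioner then places each element on the processor that references it most. The indirection array, the block partition of iterations, and the simple majority rule below are toy assumptions used for illustration; they are not the Fortran D compiler's actual partitioner interface.

    from collections import Counter

    # Illustrative irregular loop: iteration i reads x[ia[i]].
    ia = [0, 2, 2, 5, 1, 1, 4, 3, 3, 3, 0, 5]
    N_PROCS = 3
    ITERS_PER_PROC = len(ia) // N_PROCS

    def iteration_owner(i):
        """Iterations are block-partitioned among processors up front."""
        return i // ITERS_PER_PROC

    def choose_distribution():
        """Give each data element to the processor whose iterations reference it
        most often, so that most accesses in the chosen loop become local."""
        refs = {}                                   # element -> Counter of referencing processors
        for i, g in enumerate(ia):
            refs.setdefault(g, Counter())[iteration_owner(i)] += 1
        return {g: c.most_common(1)[0][0] for g, c in sorted(refs.items())}

    print(choose_distribution())
    # Element 3 lands on processor 2, which accounts for two of its three references.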
{"title":"A runtime data mapping scheme for irregular problems","authors":"R. Ponnusamy, J. Saltz, R. Das","doi":"10.1109/SHPCC.1992.232642","DOIUrl":"https://doi.org/10.1109/SHPCC.1992.232642","url":null,"abstract":"In scalable multiprocessor systems, high performance demands that computational load be balanced evenly among processors and that interprocessor communication be limited as much as possible. In this paper, the authors study the problem of automatically choosing data distributions for irregular problems. Irregular problems are programs where the data access pattern cannot be determined during compilation. The authors describe a method by which data arrays can be automatically mapped at runtime. The mapping is based on the computational patterns in one or more user-specified loops. A distributed memory compiler generates code that, at runtime, generates a distributed data structure to represent the computational pattern of the chosen loop. This computational pattern is used to determine how data arrays are to be partitioned. The compiler generates code to pass the distributed data structure to a partitioner. The work described is being pursued in the context of the CRPC Fortran D project.<<ETX>>","PeriodicalId":254515,"journal":{"name":"Proceedings Scalable High Performance Computing Conference SHPCC-92.","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-04-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125615629","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 9
Parallel solution of the generalized Helmholtz equation
Pub Date : 1992-04-26 DOI: 10.1109/SHPCC.1992.232654
L. Freitag, J. Ortega
Uses the reduced system conjugate gradient algorithm to find the solution of large, sparse, symmetric, positive definite systems of linear equations arising from finite difference discretization of the generalized Helmholtz equation. The authors examine in detail three spatial domain decompositions on distributed memory machines. They use a two-step damped Jacobi preconditioner for the Schur complement system and find that although the number of iterations required for convergence is nearly halved, overall solution time is slightly increased. The authors introduce a modification to the preconditioner in order to reduce overhead.
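The two-step damped Jacobi preconditioner is a standard building block: the action of M^{-1} on a residual r is approximated by two sweeps of z <- z + omega * D^{-1} (r - A z) starting from z = 0, where D is the diagonal of A. The sketch below applies it to a small 1D Laplacian-like matrix purely for illustration; the paper applies it to the Schur complement system, and the damping factor here is an arbitrary choice.

    import numpy as np

    def damped_jacobi_preconditioner(A, r, omega=0.8, steps=2):
        """Approximate the action of M^{-1} on r with a fixed number of damped
        Jacobi sweeps on A z = r, starting from z = 0."""
        d = np.diag(A)
        z = np.zeros_like(r)
        for _ in range(steps):
            z = z + omega * (r - A @ z) / d
        return z

    # Small symmetric positive definite test matrix (tridiagonal, illustrative only).
    n = 8
    A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    r = np.ones(n)
    z = damped_jacobi_preconditioner(A, r)
    print(np.linalg.norm(r - A @ z), np.linalg.norm(r))   # residual drops: a rough, cheap solve

In a preconditioned conjugate gradient iteration, a routine like this would be called once per iteration in place of an exact solve, which is consistent with the abstract's observation that the iteration count drops while per-iteration cost rises.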
{"title":"Parallel solution of the generalized Helmholtz equation","authors":"L. Freitag, J. Ortega","doi":"10.1109/SHPCC.1992.232654","DOIUrl":"https://doi.org/10.1109/SHPCC.1992.232654","url":null,"abstract":"Uses the reduced system conjugate gradient algorithm to find the solution of large, sparse, symmetric, positive definite systems of linear equations arising from finite difference discretization of the generalized Helmholtz equation. The authors examine in detail three spatial domain decompositions on distributed memory machines. They use a two-step damped Jacobi preconditioner for the Schur complement system and find that although the number of iterations required for convergence is nearly halved, overall solution time is slightly increased. The authors introduce a modification to the preconditioner in order to reduce overhead.<<ETX>>","PeriodicalId":254515,"journal":{"name":"Proceedings Scalable High Performance Computing Conference SHPCC-92.","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-04-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129362056","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
An object oriented approach to boundary conditions in finite difference fluid dynamics codes
Pub Date : 1992-04-26 DOI: 10.1109/SHPCC.1992.232659
I. Angus
Parallel computers have been used to solve computational fluid dynamics (CFD) problems for many years; however, while the hardware has greatly improved, the software methods for describing CFD algorithms have remained largely unchanged. From the physics and software engineering points of view, the boundary conditions consume most of the algorithmic development and programming time, but only a small part of the execution time. This paper describes a methodology that eliminates most of the coding work required to implement boundary conditions, thereby freeing the researcher to concentrate on the algorithms.
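The separation argued for in the abstract can be sketched with a small class hierarchy: each boundary condition object knows how to fill its own edge of the grid, and the interior update never mentions boundaries. The class names and the Jacobi-style sweep below are illustrative assumptions in Python, not the paper's actual design.

    import numpy as np

    class BoundaryCondition:
        """Base class: a boundary condition knows how to fill one edge of the field."""
        def apply(self, field):
            raise NotImplementedError

    class Dirichlet(BoundaryCondition):
        def __init__(self, side, value):
            self.side, self.value = side, value
        def apply(self, field):
            if self.side == "left":
                field[:, 0] = self.value
            elif self.side == "right":
                field[:, -1] = self.value

    class Neumann(BoundaryCondition):
        """Zero-gradient condition: copy the first interior column into the edge column."""
        def __init__(self, side):
            self.side = side
        def apply(self, field):
            if self.side == "left":
                field[:, 0] = field[:, 1]
            elif self.side == "right":
                field[:, -1] = field[:, -2]

    def sweep(field, conditions):
        """The interior stencil update is written once; boundary handling is delegated."""
        for bc in conditions:
            bc.apply(field)
        field[1:-1, 1:-1] = 0.25 * (field[1:-1, :-2] + field[1:-1, 2:]
                                    + field[:-2, 1:-1] + field[2:, 1:-1])

    u = np.zeros((6, 6))
    sweep(u, [Dirichlet("left", 1.0), Neumann("right")])

Adding a new boundary type then means adding one class rather than editing every loop that touches the grid edges, which is the coding-effort reduction the paper targets.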
{"title":"An object oriented approach to boundary conditions in finite difference fluid dynamics codes","authors":"I. Angus","doi":"10.1109/SHPCC.1992.232659","DOIUrl":"https://doi.org/10.1109/SHPCC.1992.232659","url":null,"abstract":"Parallel computers have been used to solve computational fluid dynamics (CFD) problems for many years; however, while the hardware has greatly improved, the software methods for describing CFD algorithms have remained largely unchanged. From the physics and software engineering points of view, the boundary conditions consume most of the algorithmic development and programming time, but only a small part of the execution time. This paper describes a methodology that eliminates most of the coding work that is required to implement boundary conditions thereby freeing the researcher to concentrate his time on the algorithms.<<ETX>>","PeriodicalId":254515,"journal":{"name":"Proceedings Scalable High Performance Computing Conference SHPCC-92.","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-04-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127699595","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3