Performance evaluation of MPI implementations and MPI based Parallel ELLPACK solvers

S. Markus, S. B. Kim, K. Pantazopoulos, A. L. Ocken, E. Houstis, P. Wu, S. Weerawarana, D. Maharry
{"title":"Performance evaluation of MPI implementations and MPI based Parallel ELLPACK solvers","authors":"S. Markus, S. B. Kim, K. Pantazopoulos, A. L. Ocken, E. Houstis, P. Wu, S. Weerawarana, D. Maharry","doi":"10.1109/MPIDC.1996.534109","DOIUrl":null,"url":null,"abstract":"We are concerned with the parallelization of finite element mesh generation and its decomposition, and the parallel solution of sparse algebraic equations which are obtained from the parallel discretization of second order elliptic partial differential equations (PDEs) using finite difference and finite element techniques. For this we use the Parallel ELLPACK (//ELLPACK) problem solving environment (PSE) which supports PDE computations on several MIMD platforms. We have considered the ITPACK library of stationary iterative solvers which we have parallelized and integrated into the //ELLPACK PSE. This Parallel ITPACK package has been implemented using the MPI, PVM, PICL, PARMACS, nCUBE Vertex and Intel NX message passing communication libraries. It performs very efficiently on a variety of hardware and communication platforms. To study the efficiency of three MPI library implementations, the performance of the Parallel ITPACK solvers was measured on several distributed memory architectures and on clusters of workstations for a testbed of elliptic boundary value PDE problems. We present a comparison of these MPI library implementations with PVM and the native communication libraries, based on their performance on these tests. Moreover we have implemented in MPI, a parallel mesh generator that concurrently produces a semi-optimal partitioning of the mesh to support various domain decomposition solution strategies across the above platforms.","PeriodicalId":432081,"journal":{"name":"Proceedings. 
Second MPI Developer's Conference","volume":"40 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1996-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"12","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings. Second MPI Developer's Conference","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/MPIDC.1996.534109","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 12

Abstract

We are concerned with the parallelization of finite element mesh generation and decomposition, and with the parallel solution of the sparse algebraic equations obtained from the parallel discretization of second-order elliptic partial differential equations (PDEs) using finite difference and finite element techniques. For this we use the Parallel ELLPACK (//ELLPACK) problem solving environment (PSE), which supports PDE computations on several MIMD platforms. We have parallelized the ITPACK library of stationary iterative solvers and integrated it into the //ELLPACK PSE. This Parallel ITPACK package has been implemented on top of the MPI, PVM, PICL, PARMACS, nCUBE Vertex, and Intel NX message-passing communication libraries, and it performs efficiently on a variety of hardware and communication platforms. To study the efficiency of three MPI library implementations, we measured the performance of the Parallel ITPACK solvers on several distributed-memory architectures and on clusters of workstations, using a testbed of elliptic boundary value PDE problems. Based on these tests, we compare the MPI library implementations with PVM and with the native communication libraries. Moreover, we have implemented in MPI a parallel mesh generator that concurrently produces a semi-optimal partitioning of the mesh, supporting various domain decomposition solution strategies across the above platforms.
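Stationary iterative methods of the kind ITPACK provides (Jacobi, SOR, and their accelerated variants) update the next iterate from the previous one alone, which is what makes them amenable to the message-passing parallelization the abstract describes: each process can update its own rows of the system after exchanging boundary values with its neighbors. As an illustrative sketch only (not the paper's Parallel ITPACK code), a minimal serial Jacobi iteration looks like this:

```python
def jacobi(A, b, x0, iters=50):
    """Jacobi iteration: x_{k+1} = D^{-1} (b - (A - D) x_k).

    A is a dense matrix given as a list of rows. Each component of x_new
    depends only on the previous iterate x, so in a distributed setting the
    rows can be partitioned across processes and updated independently once
    the needed entries of x have been communicated.
    """
    n = len(b)
    x = list(x0)
    for _ in range(iters):
        x_new = [0.0] * n
        for i in range(n):
            # Off-diagonal contribution using the previous iterate only.
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x_new[i] = (b[i] - s) / A[i][i]
        x = x_new
    return x

# Small diagonally dominant system, for which Jacobi converges:
# 4x + y = 1, x + 3y = 2  =>  x = 1/11, y = 7/11.
A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = jacobi(A, b, [0.0, 0.0])
```

In the parallel versions studied in the paper, the per-row independence above is exploited by distributing block rows of the discretized PDE operator across processors, with the communication layer (MPI, PVM, or a native library) handling the exchange of boundary unknowns between iterations.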