Method for scalable and performant GPU-accelerated simulation of multiphase compressible flow

Computer Physics Communications | IF 7.2 | CAS Tier 2 (Physics and Astronomy) | JCR Q1 (Computer Science, Interdisciplinary Applications) | Publication date: 2024-05-13 | DOI: 10.1016/j.cpc.2024.109238
Anand Radhakrishnan, Henry Le Berre, Benjamin Wilfong, Jean-Sebastien Spratt, Mauro Rodriguez Jr., Tim Colonius, Spencer H. Bryngelson
{"title":"Method for scalable and performant GPU-accelerated simulation of multiphase compressible flow","authors":"Anand Radhakrishnan ,&nbsp;Henry Le Berre ,&nbsp;Benjamin Wilfong ,&nbsp;Jean-Sebastien Spratt ,&nbsp;Mauro Rodriguez Jr. ,&nbsp;Tim Colonius ,&nbsp;Spencer H. Bryngelson","doi":"10.1016/j.cpc.2024.109238","DOIUrl":null,"url":null,"abstract":"<div><p>Multiphase compressible flows are often characterized by a broad range of space and time scales, entailing large grids and small time steps. Simulations of these flows on CPU-based clusters can thus take several wall-clock days. Offloading the compute kernels to GPUs appears attractive but is memory-bound for many finite-volume and -difference methods, damping speedups. Even when realized, GPU-based kernels lead to more intrusive communication and I/O times owing to lower computation costs. We present a strategy for GPU acceleration of multiphase compressible flow solvers that addresses these challenges and obtains large speedups at scale. We use OpenACC for directive-based offloading of all compute kernels while maintaining low-level control when needed. An established Fortran preprocessor and metaprogramming tool, Fypp, enables otherwise hidden compile-time optimizations. This strategy exposes compile-time optimizations and high memory reuse while retaining readable, maintainable, and compact code. Remote direct memory access realized via CUDA-aware MPI and GPUDirect reduces halo-exchange communication time. We implement this approach in the open-source solver MFC <span>[1]</span>. Metaprogramming results in an 8-times speedup of the most expensive kernels compared to a statically compiled program, reaching 46% of peak FLOPs on modern NVIDIA GPUs and high arithmetic intensity (about 10 FLOPs/byte). In representative simulations, a single NVIDIA A100 GPU is 7-times faster compared to an Intel Xeon Cascade Lake (6248) CPU die, or about 300-times faster compared to a single such CPU core. At the same time, near-ideal (97%) weak scaling is observed for at least 13824 GPUs on OLCF Summit. A strong scaling efficiency of 84% is retained for an 8-times increase in GPU count. Collective I/O, implemented via MPI3, helps ensure the negligible contribution of data transfers (<span><math><mo>&lt;</mo><mn>1</mn><mtext>%</mtext></math></span> of the wall time for a typical, large simulation). Large many-GPU simulations of compressible (solid-)liquid-gas flows demonstrate the practical utility of this strategy.</p></div>","PeriodicalId":285,"journal":{"name":"Computer Physics Communications","volume":null,"pages":null},"PeriodicalIF":7.2000,"publicationDate":"2024-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computer Physics Communications","FirstCategoryId":"101","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0010465524001619","RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS","Score":null,"Total":0}
Citations: 0

Abstract

Multiphase compressible flows are often characterized by a broad range of space and time scales, entailing large grids and small time steps. Simulations of these flows on CPU-based clusters can thus take several wall-clock days. Offloading the compute kernels to GPUs appears attractive but is memory-bound for many finite-volume and -difference methods, damping speedups. Even when realized, GPU-based kernels lead to more intrusive communication and I/O times owing to lower computation costs. We present a strategy for GPU acceleration of multiphase compressible flow solvers that addresses these challenges and obtains large speedups at scale. We use OpenACC for directive-based offloading of all compute kernels while maintaining low-level control when needed. An established Fortran preprocessor and metaprogramming tool, Fypp, enables otherwise hidden compile-time optimizations. This strategy exposes compile-time optimizations and high memory reuse while retaining readable, maintainable, and compact code. Remote direct memory access realized via CUDA-aware MPI and GPUDirect reduces halo-exchange communication time. We implement this approach in the open-source solver MFC [1]. Metaprogramming results in an 8-times speedup of the most expensive kernels compared to a statically compiled program, reaching 46% of peak FLOPs on modern NVIDIA GPUs and high arithmetic intensity (about 10 FLOPs/byte). In representative simulations, a single NVIDIA A100 GPU is 7-times faster compared to an Intel Xeon Cascade Lake (6248) CPU die, or about 300-times faster compared to a single such CPU core. At the same time, near-ideal (97%) weak scaling is observed for at least 13824 GPUs on OLCF Summit. A strong scaling efficiency of 84% is retained for an 8-times increase in GPU count. Collective I/O, implemented via MPI3, helps ensure the negligible contribution of data transfers (<1% of the wall time for a typical, large simulation). Large many-GPU simulations of compressible (solid-)liquid-gas flows demonstrate the practical utility of this strategy.
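The abstract credits the largest kernel speedups to Fypp metaprogramming combined with OpenACC offloading, but does not show what that pattern looks like. Below is a minimal, hypothetical sketch of the general idea (not code from MFC): a Fypp-unrolled component loop inside an OpenACC-offloaded update, with illustrative names (update_conserved, num_fluids) chosen for this example only.

    #:set num_fluids = 3
    subroutine update_conserved(nx, q, rhs, dt)
        ! Assumes q and rhs are already resident on the GPU, e.g. via an
        ! enclosing !$acc enter data region.
        implicit none
        integer, intent(in) :: nx
        real(kind=8), intent(inout) :: q(nx, ${num_fluids}$)
        real(kind=8), intent(in)    :: rhs(nx, ${num_fluids}$)
        real(kind=8), intent(in)    :: dt
        integer :: i

        !$acc parallel loop gang vector default(present)
        do i = 1, nx
            ! Fypp expands this loop at preprocessing time, so the compiler
            ! sees a fixed number of statements with compile-time array extents.
            #:for f in range(1, num_fluids + 1)
            q(i, ${f}$) = q(i, ${f}$) + dt*rhs(i, ${f}$)
            #:endfor
        end do
    end subroutine update_conserved

Running the file through Fypp before compilation (e.g., fypp update.fpp update.f90) produces plain Fortran with the inner loop unrolled, which is the kind of otherwise hidden compile-time specialization the abstract refers to.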
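The reduced halo-exchange time attributed to CUDA-aware MPI and GPUDirect typically hinges on passing device pointers straight to MPI, which OpenACC exposes through the host_data construct. The following is a sketch of that pattern under stated assumptions; the subroutine and buffer names are hypothetical, not MFC's actual interface.

    subroutine exchange_halo(send_buf, recv_buf, n, neighbor, comm)
        ! Assumes both buffers are already device-resident (e.g., packed by an
        ! earlier OpenACC kernel) and that the MPI library is CUDA-aware.
        use mpi
        implicit none
        integer, intent(in) :: n, neighbor, comm
        real(kind=8), intent(in)  :: send_buf(n)
        real(kind=8), intent(out) :: recv_buf(n)
        integer :: ierr, stat(MPI_STATUS_SIZE)

        ! host_data hands the device addresses of the buffers to MPI, so the
        ! exchange reads and writes GPU memory directly instead of staging
        ! copies through the host.
        !$acc host_data use_device(send_buf, recv_buf)
        call MPI_Sendrecv(send_buf, n, MPI_DOUBLE_PRECISION, neighbor, 0, &
                          recv_buf, n, MPI_DOUBLE_PRECISION, neighbor, 0, &
                          comm, stat, ierr)
        !$acc end host_data
    end subroutine exchange_halo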
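Likewise, the abstract's claim that MPI-3 collective I/O keeps data transfers below 1% of wall time corresponds to the standard MPI-IO pattern in which every rank joins a single collective write. The sketch below assumes each rank owns a contiguous slab at a precomputed byte offset; the names are illustrative only.

    subroutine write_field_collective(filename, field, n_local, offset_bytes, comm)
        use mpi
        implicit none
        character(len=*), intent(in) :: filename
        integer, intent(in) :: n_local, comm
        integer(kind=MPI_OFFSET_KIND), intent(in) :: offset_bytes
        real(kind=8), intent(in) :: field(n_local)
        integer :: fh, ierr

        call MPI_File_open(comm, filename, ior(MPI_MODE_WRONLY, MPI_MODE_CREATE), &
                           MPI_INFO_NULL, fh, ierr)
        ! The *_at_all variant is collective: MPI can aggregate the per-rank
        ! slabs into large, well-aligned writes to the parallel file system.
        call MPI_File_write_at_all(fh, offset_bytes, field, n_local, &
                                   MPI_DOUBLE_PRECISION, MPI_STATUS_IGNORE, ierr)
        call MPI_File_close(fh, ierr)
    end subroutine write_field_collective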

Source journal
Computer Physics Communications (Physics | Computer Science: Interdisciplinary Applications)
CiteScore: 12.10
Self-citation rate: 3.20%
Articles published: 287
Review time: 5.3 months
Journal description: The focus of CPC is on contemporary computational methods and techniques and their implementation, the effectiveness of which will normally be evidenced by the author(s) within the context of a substantive problem in physics. Within this setting CPC publishes two types of paper.

Computer Programs in Physics (CPiP): These papers describe significant computer programs to be archived in the CPC Program Library, which is held in the Mendeley Data repository. The submitted software must be covered by an approved open source licence. Papers and associated computer programs that address a problem of contemporary interest in physics that cannot be solved by current software are particularly encouraged.

Computational Physics Papers (CP): These are research papers in, but not limited to, the following themes across computational physics and related disciplines: mathematical and numerical methods and algorithms; computational models, including those associated with the design, control and analysis of experiments; and algebraic computation. Each will normally include software implementation and performance details. The software implementation should, ideally, be available via GitHub, Zenodo or an institutional repository.

In addition, research papers on the impact of advanced computer architecture and special-purpose computers on computing in the physical sciences, and software topics related to, and of importance in, the physical sciences may be considered.
Latest articles from this journal
A novel model for direct numerical simulation of suspension dynamics with arbitrarily shaped convex particles
Editorial Board
Study α decay and proton emission based on data-driven symbolic regression
Efficient determination of free energies of non-ideal solid solutions via hybrid Monte Carlo simulations
1D drift-kinetic numerical model based on semi-implicit particle-in-cell method