CaLES: A GPU-accelerated solver for large-eddy simulation of wall-bounded flows
Maochao Xiao, Alessandro Ceci, Pedro Costa, Johan Larsson, Sergio Pirozzoli
Computer Physics Communications, Volume 310, Article 109546. Published 2025-02-14. DOI: 10.1016/j.cpc.2025.109546 (https://www.sciencedirect.com/science/article/pii/S0010465525000499)
Citations: 0
Abstract
We introduce CaLES, a GPU-accelerated finite-difference solver designed for large-eddy simulations (LES) of incompressible wall-bounded flows in massively parallel environments. Built upon the existing direct numerical simulation (DNS) solver CaNS, CaLES relies on low-storage, third-order Runge-Kutta schemes for temporal discretization, with the option to treat the viscous terms via an implicit Crank-Nicolson scheme in one or three directions. A fast direct solver, based on eigenfunction expansions, is used to solve the discretized Poisson/Helmholtz equations. For turbulence modeling, the classical Smagorinsky model with van Driest near-wall damping and the dynamic Smagorinsky model are implemented, along with a logarithmic-law wall model. GPU acceleration is achieved through OpenACC directives, following CaNS-2.3.0. Performance assessments were conducted on the Leonardo cluster at CINECA, Italy. Each node is equipped with one Intel Xeon Platinum 8358 CPU (2.60 GHz, 32 cores) and four NVIDIA A100 GPUs (64 GB HBM2e), interconnected via NVLink 3.0 (200 GB/s). The inter-node communication bandwidth is 25 GB/s, supported by a DragonFly+ network architecture with NVIDIA Mellanox InfiniBand HDR. Results indicate that the computational speed of a single GPU is equivalent to that of approximately 15 CPU nodes, depending on the treatment of the viscous terms and the subgrid-scale model, and that the solver scales efficiently across multiple GPUs. The predictive capability of CaLES has been tested on multiple flow cases, including decaying isotropic turbulence, turbulent channel flow, and turbulent duct flow. The high computational efficiency of the solver enables grid convergence studies on extremely fine grids, pinpointing non-monotonic grid convergence for wall-modeled LES.
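As a concrete illustration of the turbulence-modeling ingredients named in the abstract, the minimal NumPy sketch below spells out the algebra behind two of them: the classical Smagorinsky eddy viscosity with van Driest near-wall damping, nu_t = (Cs D Delta)^2 |S| with D = 1 - exp(-y+/A+), and a Newton inversion of the logarithmic law of the wall, u/u_tau = (1/kappa) ln(y u_tau/nu) + B, which is the core operation of a log-law wall model. The constants (Cs = 0.17, A+ = 26, kappa = 0.41, B = 5.2) and function names are conventional textbook choices assumed here for illustration; they are not taken from the CaLES source, and the sketch does not reproduce the solver's actual interface.

```python
import numpy as np

# Conventional textbook constants (assumed here; not read from the CaLES source).
CS = 0.17       # Smagorinsky constant
A_PLUS = 26.0   # van Driest damping constant
KAPPA = 0.41    # von Karman constant
B = 5.2         # log-law intercept

def smagorinsky_nut(strain_mag, delta, y_plus):
    """Classical Smagorinsky eddy viscosity with van Driest damping:
    nu_t = (Cs * D * Delta)**2 * |S|, with D = 1 - exp(-y+/A+).

    strain_mag : strain-rate magnitude |S| = sqrt(2 S_ij S_ij)
    delta      : filter width, e.g. (dx*dy*dz)**(1/3)
    y_plus     : wall distance in viscous units
    """
    damping = 1.0 - np.exp(-y_plus / A_PLUS)
    return (CS * damping * delta) ** 2 * strain_mag

def log_law_utau(u, y, nu, n_iter=20):
    """Invert the log law  u/u_tau = ln(y u_tau / nu)/kappa + B
    for the friction velocity u_tau by Newton iteration, given the
    LES velocity u sampled at wall distance y."""
    utau = np.sqrt(nu * u / y)  # viscous-sublayer initial guess
    for _ in range(n_iter):
        f = u / utau - (np.log(y * utau / nu) / KAPPA + B)
        dfdutau = -u / utau**2 - 1.0 / (KAPPA * utau)
        utau -= f / dfdutau
    return utau

# Example: velocity sample u = 10 m/s at y = 0.05 m in an air-like fluid.
utau = log_law_utau(u=10.0, y=0.05, nu=1.5e-5)
tau_w = utau**2  # kinematic wall shear stress tau_w/rho
```

In wall-modeled LES, the friction velocity obtained this way determines the wall shear stress tau_w = rho u_tau^2, which replaces the no-slip condition as the wall boundary condition on grids too coarse to resolve the viscous sublayer.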
Journal Overview:
The focus of CPC is on contemporary computational methods and techniques and their implementation, the effectiveness of which will normally be evidenced by the author(s) within the context of a substantive problem in physics. Within this setting CPC publishes two types of paper.
Computer Programs in Physics (CPiP)
These papers describe significant computer programs to be archived in the CPC Program Library which is held in the Mendeley Data repository. The submitted software must be covered by an approved open source licence. Papers and associated computer programs that address a problem of contemporary interest in physics that cannot be solved by current software are particularly encouraged.
Computational Physics Papers (CP)
These are research papers in, but are not limited to, the following themes across computational physics and related disciplines.
mathematical and numerical methods and algorithms;
computational models including those associated with the design, control and analysis of experiments; and
algebraic computation.
Each will normally include software implementation and performance details. The software implementation should, ideally, be available via GitHub, Zenodo or an institutional repository. In addition, research papers on the impact of advanced computer architecture and special-purpose computers on computing in the physical sciences, as well as software topics related to, and of importance in, the physical sciences, may be considered.