HPC challenges and opportunities of industrial-scale reactive fluidized bed simulation using meshes of several billion cells on the route of Exascale

IF 4.5 | CAS Tier 2 (Engineering) | JCR Q2 (Engineering, Chemical) | Powder Technology | Pub Date: 2024-06-18 | DOI: 10.1016/j.powtec.2024.120018
Hervé Neau , Renaud Ansart , Cyril Baudry , Yvan Fournier , Nicolas Mérigoux , Chaï Koren , Jérome Laviéville , Nicolas Renon , Olivier Simonin
{"title":"使用数十亿个单元网格进行工业级反应流化床模拟的高性能计算挑战与机遇","authors":"Hervé Neau ,&nbsp;Renaud Ansart ,&nbsp;Cyril Baudry ,&nbsp;Yvan Fournier ,&nbsp;Nicolas Mérigoux ,&nbsp;Chaï Koren ,&nbsp;Jérome Laviéville ,&nbsp;Nicolas Renon ,&nbsp;Olivier Simonin","doi":"10.1016/j.powtec.2024.120018","DOIUrl":null,"url":null,"abstract":"<div><p>Inside fluidized bed reactors, gas–solid flows are very complex: multi-scale, coupled, reactive, turbulent and unsteady. Accounting for them in an Euler-nfluid framework induces significantly expensive numerical simulations at academic scales and even more at industrial scales. 3D numerical simulations of gas–particle fluidized beds at industrial scales are limited by the High Performances Computing (HPC) capabilities of Computational Fluid Dynamics (CFD) software and by available computational power. In recent years, pre-Exascale supercomputers came into operation with better energy efficiency and continuously increasing computational resources.</p><p>The present article is a direct continuation of previous work, Neau et al. (2020) which demonstrated the feasibility of a massively parallel simulation of an industrial-scale polydispersed fluidized-bed reactor with a mesh of 1 billion cells. Since then, we tried to push simulations of these systems to their limits by performing large-scale computations on even more recent and powerful supercomputers, once again using up to the entirety of these supercomputers (up to 286,000 cores). We used the same fluidized bed reactor but with more refined unstructured meshes: 8 and 64 billion cells.</p><p>This article focuses on efficiency and performances of neptune_cfd code (based on Euler-nfluid approach) measured on several supercomputers with meshes of 1, 8 and 64 billion cells. It presents sensitivity studies conducted to improve HPC at these very large scales.</p><p>On the basis of these highly-refined simulations of industrial scale systems using pre-Exascale supercomputers with neptune_cfd, we defined the upper limits of simulations we can manage efficiently in terms of mesh size, count of MPI processes and of simulation time. One billion cells computations are the most refined computation for production. Eight billion cells computations perform well up to 60,000 cores from a HPC point of view with an efficiency <span><math><mo>&gt;</mo></math></span>85% but are still very expensive. The size of restart and mesh files is very large, post-processing is complicated and data management becomes near-impossible. 64 billion cells computations go beyond all limits: solver, supercomputer, MPI, file size, post-processing, data management. For these reasons, we barely managed to execute more than a few iterations.</p><p>Over the last 30 years, neptune_cfd HPC capabilities improved exponentially by tracking hardware evolution and by implementing state-of-the-art techniques for parallel and distributed computing. However, our last findings show that currently implemented MPI/Multigrid approaches are not sufficient to fully benefit from pre-Exascale system. 
This work allows us to identify current bottlenecks in neptune_cfd and to formulate guidelines for an upcoming Exascale-ready version of this code that will hopefully be able to manage even the most complex industrial-scale gas–particle systems.</p></div>","PeriodicalId":407,"journal":{"name":"Powder Technology","volume":null,"pages":null},"PeriodicalIF":4.5000,"publicationDate":"2024-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"HPC challenges and opportunities of industrial-scale reactive fluidized bed simulation using meshes of several billion cells on the route of Exascale\",\"authors\":\"Hervé Neau ,&nbsp;Renaud Ansart ,&nbsp;Cyril Baudry ,&nbsp;Yvan Fournier ,&nbsp;Nicolas Mérigoux ,&nbsp;Chaï Koren ,&nbsp;Jérome Laviéville ,&nbsp;Nicolas Renon ,&nbsp;Olivier Simonin\",\"doi\":\"10.1016/j.powtec.2024.120018\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>Inside fluidized bed reactors, gas–solid flows are very complex: multi-scale, coupled, reactive, turbulent and unsteady. Accounting for them in an Euler-nfluid framework induces significantly expensive numerical simulations at academic scales and even more at industrial scales. 3D numerical simulations of gas–particle fluidized beds at industrial scales are limited by the High Performances Computing (HPC) capabilities of Computational Fluid Dynamics (CFD) software and by available computational power. In recent years, pre-Exascale supercomputers came into operation with better energy efficiency and continuously increasing computational resources.</p><p>The present article is a direct continuation of previous work, Neau et al. (2020) which demonstrated the feasibility of a massively parallel simulation of an industrial-scale polydispersed fluidized-bed reactor with a mesh of 1 billion cells. Since then, we tried to push simulations of these systems to their limits by performing large-scale computations on even more recent and powerful supercomputers, once again using up to the entirety of these supercomputers (up to 286,000 cores). We used the same fluidized bed reactor but with more refined unstructured meshes: 8 and 64 billion cells.</p><p>This article focuses on efficiency and performances of neptune_cfd code (based on Euler-nfluid approach) measured on several supercomputers with meshes of 1, 8 and 64 billion cells. It presents sensitivity studies conducted to improve HPC at these very large scales.</p><p>On the basis of these highly-refined simulations of industrial scale systems using pre-Exascale supercomputers with neptune_cfd, we defined the upper limits of simulations we can manage efficiently in terms of mesh size, count of MPI processes and of simulation time. One billion cells computations are the most refined computation for production. Eight billion cells computations perform well up to 60,000 cores from a HPC point of view with an efficiency <span><math><mo>&gt;</mo></math></span>85% but are still very expensive. The size of restart and mesh files is very large, post-processing is complicated and data management becomes near-impossible. 64 billion cells computations go beyond all limits: solver, supercomputer, MPI, file size, post-processing, data management. 
For these reasons, we barely managed to execute more than a few iterations.</p><p>Over the last 30 years, neptune_cfd HPC capabilities improved exponentially by tracking hardware evolution and by implementing state-of-the-art techniques for parallel and distributed computing. However, our last findings show that currently implemented MPI/Multigrid approaches are not sufficient to fully benefit from pre-Exascale system. This work allows us to identify current bottlenecks in neptune_cfd and to formulate guidelines for an upcoming Exascale-ready version of this code that will hopefully be able to manage even the most complex industrial-scale gas–particle systems.</p></div>\",\"PeriodicalId\":407,\"journal\":{\"name\":\"Powder Technology\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":4.5000,\"publicationDate\":\"2024-06-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Powder Technology\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0032591024006624\",\"RegionNum\":2,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"ENGINEERING, CHEMICAL\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Powder Technology","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0032591024006624","RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, CHEMICAL","Score":null,"Total":0}
Citations: 0

Abstract



Inside fluidized bed reactors, gas–solid flows are very complex: multi-scale, coupled, reactive, turbulent and unsteady. Accounting for them in an Euler n-fluid framework leads to very expensive numerical simulations at academic scales, and even more so at industrial scales. 3D numerical simulations of gas–particle fluidized beds at industrial scales are limited by the High Performance Computing (HPC) capabilities of Computational Fluid Dynamics (CFD) software and by the available computational power. In recent years, pre-Exascale supercomputers have come into operation, offering better energy efficiency and continuously increasing computational resources.
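For context, the Euler n-fluid (multi-fluid Eulerian) framework solves one set of averaged conservation equations per phase, which is the main source of the cost mentioned above. The generic, textbook form of the phase mass and momentum balances is sketched below; the notation and the closures implied by $\boldsymbol{\Sigma}_k$ and $\mathbf{I}_k$ are illustrative assumptions, not the specific models implemented in neptune_cfd.

$$\frac{\partial}{\partial t}\left(\alpha_k \rho_k\right) + \nabla \cdot \left(\alpha_k \rho_k \mathbf{u}_k\right) = \Gamma_k$$

$$\frac{\partial}{\partial t}\left(\alpha_k \rho_k \mathbf{u}_k\right) + \nabla \cdot \left(\alpha_k \rho_k \mathbf{u}_k \otimes \mathbf{u}_k\right) = -\alpha_k \nabla p + \nabla \cdot \boldsymbol{\Sigma}_k + \alpha_k \rho_k \mathbf{g} + \mathbf{I}_k, \qquad \sum_k \alpha_k = 1$$

Here $\alpha_k$, $\rho_k$ and $\mathbf{u}_k$ are the volume fraction, density and mean velocity of phase $k$ (the gas or a particle class), $\Gamma_k$ is the interphase mass transfer (e.g. due to reactions), $\boldsymbol{\Sigma}_k$ the effective stress tensor and $\mathbf{I}_k$ the interphase momentum exchange (drag, etc.). Every additional particle class adds a full set of such equations on every cell, which is why billion-cell reactive cases become so expensive.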

The present article is a direct continuation of previous work (Neau et al., 2020), which demonstrated the feasibility of a massively parallel simulation of an industrial-scale polydispersed fluidized-bed reactor with a mesh of 1 billion cells. Since then, we have tried to push simulations of these systems to their limits by performing large-scale computations on even more recent and powerful supercomputers, once again using up to the entirety of those machines (up to 286,000 cores). We used the same fluidized bed reactor but with more refined unstructured meshes of 8 and 64 billion cells.

This article focuses on the efficiency and performance of the neptune_cfd code (based on the Euler n-fluid approach) measured on several supercomputers with meshes of 1, 8 and 64 billion cells. It presents the sensitivity studies conducted to improve HPC performance at these very large scales.

On the basis of these highly refined simulations of industrial-scale systems performed with neptune_cfd on pre-Exascale supercomputers, we defined the upper limits of the simulations we can manage efficiently in terms of mesh size, number of MPI processes and simulation time. One-billion-cell computations are the most refined computations practical for production. Eight-billion-cell computations perform well up to 60,000 cores from an HPC point of view, with an efficiency above 85%, but remain very expensive: restart and mesh files are very large, post-processing is complicated and data management becomes nearly impossible. Sixty-four-billion-cell computations exceed every limit, whether of the solver, the supercomputer, MPI, file sizes, post-processing or data management; for these reasons, we barely managed to execute more than a few iterations.
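To put the quoted figures in perspective, the short Python sketch below shows how strong-scaling parallel efficiency and restart-file size are commonly estimated. It is a minimal illustration, not code from neptune_cfd: the reference timings and the 20-fields-per-cell figure are hypothetical placeholders, and only the roughly 85% efficiency at 60,000 cores and the cell counts come from the abstract.

```python
# Minimal sketch of two back-of-the-envelope HPC estimates (illustrative only).

def parallel_efficiency(t_ref: float, n_ref: int, t: float, n: int) -> float:
    """Strong-scaling efficiency: speed-up relative to a reference run,
    divided by the corresponding increase in core count."""
    speedup = t_ref / t
    return speedup / (n / n_ref)

def restart_size_bytes(n_cells: float, n_fields: int, bytes_per_value: int = 8) -> float:
    """Rough restart-file size for n_fields double-precision scalars per cell
    (mesh connectivity and metadata are ignored)."""
    return n_cells * n_fields * bytes_per_value

if __name__ == "__main__":
    # Hypothetical timings: 100 s/iteration on 15,000 cores vs 27.5 s/iteration
    # on 60,000 cores corresponds to ~91% strong-scaling efficiency, in the
    # same range as the >85% reported for the 8-billion-cell case.
    print(f"efficiency ~ {parallel_efficiency(100.0, 15_000, 27.5, 60_000):.0%}")

    # Assuming ~20 scalar fields per cell, a 64-billion-cell restart file
    # already exceeds 10 TB, which illustrates why data management and
    # post-processing become nearly impossible at that scale.
    print(f"restart file ~ {restart_size_bytes(64e9, 20) / 1e12:.0f} TB")
```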

Over the last 30 years, the HPC capabilities of neptune_cfd have improved exponentially by tracking hardware evolution and by implementing state-of-the-art techniques for parallel and distributed computing. However, our latest findings show that the currently implemented MPI/multigrid approach is not sufficient to fully benefit from pre-Exascale systems. This work allows us to identify the current bottlenecks in neptune_cfd and to formulate guidelines for an upcoming Exascale-ready version of the code that will hopefully be able to handle even the most complex industrial-scale gas–particle systems.

Source journal
Powder Technology (Engineering: Chemical)
CiteScore: 9.90
Self-citation rate: 15.40%
Articles published: 1047
Review time: 46 days
Journal description
Powder Technology is an International Journal on the Science and Technology of Wet and Dry Particulate Systems. Powder Technology publishes papers on all aspects of the formation of particles and their characterisation and on the study of systems containing particulate solids. No limitation is imposed on the size of the particles, which may range from nanometre scale, as in pigments or aerosols, to that of mined or quarried materials. The following list of topics is not intended to be comprehensive, but rather to indicate typical subjects which fall within the scope of the journal's interests:
Formation and synthesis of particles by precipitation and other methods.
Modification of particles by agglomeration, coating, comminution and attrition.
Characterisation of the size, shape, surface area, pore structure and strength of particles and agglomerates (including the origins and effects of inter-particle forces).
Packing, failure, flow and permeability of assemblies of particles.
Particle-particle interactions and suspension rheology.
Handling and processing operations such as slurry flow, fluidization, pneumatic conveying.
Interactions between particles and their environment, including delivery of particulate products to the body.
Applications of particle technology in production of pharmaceuticals, chemicals, foods, pigments, structural, and functional materials and in environmental and energy related matters.
For materials-oriented contributions we are looking for articles revealing the effect of particle/powder characteristics (size, morphology and composition, in that order) on material performance or functionality and, ideally, comparison to any industrial standard.
Latest articles in this journal
Study on ore dust pollution diffusion and new dust removal system in drawing funnel operation of metal mine
Effect of grinding media on surface property and flotation performance of ilmenite
An optimization framework for achieving optimal hydrocyclone's performance aligning with decision-makers' preferences
Effect of airflow velocity in vessel-pipelines on dust explosion during pneumatic conveying
Drying kinetics of CeO2-ZrO2 ceramic powders under microwave heating based on a thin-layer drying model