Pub Date: 2026-04-01 · Epub Date: 2026-01-09 · DOI: 10.1016/j.cpc.2026.110023
Xuning Zhao , Wentao Ma , Shafquat Islam , Aditya Narkhede , Kevin Wang
<div><div>M2C (Multiphysics Modeling and Computation) is an open-source software for simulating multi-material fluid flows and fluid-structure interactions under extreme conditions, such as high pressures, high temperatures, shock waves, and large interface deformations. It employs a finite volume method to solve the compressible Navier-Stokes equations and supports a wide range of thermodynamic equations of state. M2C incorporates models of laser radiation and absorption, phase transition, and ionization, coupled with continuum dynamics. Multi-material interfaces are evolved using a level set method, while fluid-structure interfaces are tracked using an embedded boundary method. Advective fluxes across interfaces are computed using FIVER (FInite Volume method based on Exact multi-material Riemann problems). For two-way fluid-structure interaction, M2C is coupled with the open-source structural dynamics solver Aero-S using a partitioned procedure. The M2C code is written in C++ and parallelized with MPI for high-performance computing. The source package includes a set of example problems for demonstration and user training. Accuracy is verified through benchmark cases such as Riemann problems, interface evolution, single-bubble dynamics, and ionization response. 
Several multiphysics applications are also presented, including laser-induced thermal cavitation, explosion and blast mitigation, and hypervelocity impact.</div><div><strong>PROGRAM SUMMARY</strong> <em>Program Title:</em> M2C (Multiphysics Modeling and Computation)</div><div><em>CPC Library link to program files:</em> https://doi.org/10.17632/gdjrrjwgf4.1</div><div><em>Developer’s repository link:</em> https://github.com/kevinwgy/m2c</div><div><em>Licensing provisions:</em> GNU General Public License 3</div><div><em>Programming language:</em> C++</div><div><em>Supplementary material:</em> The M2C package includes a suite of test cases that illustrate the software’s capabilities. These examples can also serve as templates for setting up new simulations.</div><div><em>Nature of problem:</em> This work addresses the analysis of multi-material fluid flow and fluid-structure interaction problems under conditions involving high pressure, high velocity, high temperature, or a combination of them. In such problems, material compressibility and thermodynamics play a significant role, and the system may exhibit shock waves, large structural deformations, and large deformation of fluid material subdomains. Unlike conventional fluid dynamics problems, the boundaries of the fluid domain and material subdomains are time-dependent, unknown in advance, and must be determined as part of the analysis. Across material interfaces, some state variables (e.g., density) may exhibit jumps of several orders of magnitude, while others (e.g., normal velocity) remain continuous. Some problems may also involve strong external energy sources, such as lasers, whose energy deposition is coupled with the fluid dynamics. In some cases, additional physical processes such as phase transition (e.g., vaporization) and ionization may arise and must be incorporated into the analysis. The example problems presented in this work include laser-induced cavitation, underwater explosion, blast mitigation, and hypervelocity projectile impact. More broadly, this class of problems is relevant to many engineering and biomedical applications in which understanding continuum mechanics and material behavior under extreme conditions is essential.</div></div>
Solution method: At its core, M2C is a three-dimensional finite volume solver for compressible flow dynamics. It is designed to support arbitrary convex equations of state in a modular fashion. Several models are currently implemented, including Noble-Abel stiffened gas, Jones-Wilkins-Lee (JWL), Mie-Grüneisen, Tillotson, and an example of ANEOS (analytic equations of state). These models allow M2C to analyze a wide range of materials. M2C tracks massless interfaces between fluid materials using a level set method, providing a sharp interface representation and supporting topological changes such as merging and separation. For fluid-structure interfaces, an embedded boundary method is adopted, which simplifies mesh generation and accommodates large structural deformations. Across material interfaces, M2C computes advective fluxes using the FIVER (FInite Volume method based on Exact multi-material Riemann problems) method, which is robust in the presence of large jumps in state variables. M2C implements a partitioned procedure for two-way coupling with an external structural dynamics solver, exchanging data at every time step; coupled fluid-structure analyses have been performed with the open-source Aero-S solver. Additional features include a latent-heat reservoir method for vaporization and a multi-species non-ideal Saha equation solver for material ionization. The M2C code is parallelized with MPI for high-performance computing and is designed with modular and object-oriented principles to facilitate extension and reuse.
Title: "M2C: An open-source software for multiphysics simulation of compressible multi-material flows and fluid-structure interactions". Computer Physics Communications 321 (2026), Article 110023.
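The level set evolution described above can be made concrete in one dimension: the material interface is the zero crossing of a signed-distance field advected with the flow velocity. The following is a deliberately minimal, first-order upwind Python sketch under an assumed constant velocity, not M2C's actual C++/MPI implementation:

```python
def advect_level_set(phi, u, dx, dt, steps):
    """Advect a 1-D level set field phi with constant velocity u using
    first-order upwind differences; the zero crossing of phi marks the
    material interface. Boundary values are held fixed for simplicity."""
    c = u * dt / dx  # CFL number; stability requires |c| <= 1
    for _ in range(steps):
        new = list(phi)
        for i in range(1, len(phi) - 1):
            if u > 0:
                new[i] = phi[i] - c * (phi[i] - phi[i - 1])
            else:
                new[i] = phi[i] - c * (phi[i + 1] - phi[i])
        phi = new
    return phi
```

With phi initialized as the signed distance to an interface at x = 0.5, the zero crossing moves at speed u; in a real solver the advection velocity comes from the flow solution rather than being a constant.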
Pub Date: 2026-04-01 · Epub Date: 2026-01-16 · DOI: 10.1016/j.cpc.2026.110037
Arman Babakhani , Lev Barash , Itay Hen
We present a universal quantum Monte Carlo algorithm for simulating arbitrary high-spin (spin greater than 1/2) Hamiltonians, based on the recently developed permutation matrix representation (PMR) framework. Our approach extends a previously developed PMR-QMC method for spin-1/2 Hamiltonians [Phys. Rev. Research 6, 013281 (2024)]. Because it does not rely on a local bond decomposition, the method applies equally well to models with arbitrary connectivities, long-range and multi-spin interactions, and its closed-walk formulation allows a natural analysis of sign-problem conditions in terms of cycle weights. To demonstrate its applicability and versatility, we apply our method to spin-1 and spin-3/2 quantum Heisenberg models on the square lattice, as well as to randomly generated high-spin Hamiltonians. Additionally, we show how the approach naturally extends to general Hamiltonians involving mixtures of particle species, including bosons and fermions. We have made our program code freely accessible on GitHub.
Title: "A quantum Monte Carlo algorithm for arbitrary high-spin Hamiltonians". Computer Physics Communications 321 (2026), Article 110037.
Pub Date: 2026-04-01 · Epub Date: 2026-01-15 · DOI: 10.1016/j.cpc.2026.110026
Jia-Chen Dai, Feng Feng, Ming-Ming Liu
Version 1.2 of HepLib (a C++ library for computations in High Energy Physics) is presented. HepLib builds on top of other well-established libraries and programs, including GINAC, FLINT, FORM, FIRE, etc.; its first version was released in Comput. Phys. Commun. 265, 107982 (2021). This minor upgrade updates the underlying libraries and programs to their latest versions, fixes several bugs, improves performance in many functions, and introduces a number of new features. We also carry out experimental tests on the program FIRE, employing FLINT to enhance its performance with multivariate polynomials in integration-by-parts (IBP) reduction.
Title: "HepLib: a C++ library for high energy physics (version 1.2)". Computer Physics Communications 321 (2026), Article 110026.
Pub Date: 2026-04-01 · Epub Date: 2026-01-15 · DOI: 10.1016/j.cpc.2026.110027
Deniz Elbek , Fatih Taşyaran , Bora Uçar , Kamer Kaya
<div><div>The <em>permanent</em> is a function, defined for a square matrix, with applications in various domains including quantum computing, statistical physics, complexity theory, combinatorics, and graph theory. Its formula is similar to that of the determinant; however, unlike the determinant, its exact computation is #P-complete, i.e., there is no algorithm to compute the permanent in polynomial time unless P=NP. For an <em>n</em> × <em>n</em> matrix, the fastest algorithm has a time complexity of <span><math><mrow><mi>O</mi><mo>(</mo><msup><mn>2</mn><mrow><mi>n</mi><mo>−</mo><mn>1</mn></mrow></msup><mi>n</mi><mo>)</mo></mrow></math></span>. Although supercomputers have been employed for permanent computation before, there is no prior work and, more importantly, no publicly available software that leverages cutting-edge High-Performance Computing accelerators such as GPUs. In this work, we design, develop, and investigate the performance of <span>SUperman</span>, a complete software suite that can compute matrix permanents on multiple nodes/GPUs of a cluster while handling various matrix types (real/complex/binary, sparse/dense) with a dedicated treatment for each type. Running on a single NVIDIA A100 GPU, <span>SUperman</span> is up to 86× faster than a state-of-the-art parallel algorithm on 44 Intel Xeon cores running at 2.10 GHz.
Leveraging 192 GPUs, <span>SUperman</span> computes the permanent of a 62 × 62 matrix in 1.63 days, marking the largest reported permanent computation to date.</div><div>PROGRAM SUMMARY</div><div><em>Program Title:</em> <span>SUperman</span></div><div><em>CPC Library link to program files:</em> https://doi.org/10.17632/5fhxcvfmrw.1</div><div><em>Developer’s repository link:</em> https://github.com/SU-HPC/superman</div><div><em>Licensing provisions:</em> MIT</div><div><em>Programming language:</em> <span>C++</span>, <span>CUDA</span></div><div><em>Nature of problem:</em></div><div>The permanent plays a crucial role in various fields such as quantum computing, statistical physics, combinatorics, and graph theory. Unlike the determinant, computing the permanent is #P-complete [1], and its exact computation has exponential complexity. Even the fastest known algorithms require time that grows exponentially with the matrix dimension, making the problem computationally intractable for large matrices. The state-of-the-art tools leverage supercomputers [2, 3], but there remains a notable gap in publicly available software that exploits modern High-Performance Computing accelerators, such as GPUs. This limitation hampers researchers who require efficient and scalable methods for permanent computation, particularly when dealing with various matrix types (real, complex, binary, sparse, dense) in practical applications.</div><div><em>Solution method:</em> <span>SUperman</span> is a complete open-source</div></div>
Title: "SUperman: Efficient permanent computation on GPUs". Computer Physics Communications 321 (2026), Article 110027.
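For context on the quoted exponential complexity, the classical exact approach is Ryser's inclusion-exclusion formula; its Gray-code variant attains the O(2^(n-1) n) bound cited in the abstract. The sketch below is the naive O(2^n n^2) form in Python, for illustration only, and is in no way SUperman's C++/CUDA implementation:

```python
def permanent_ryser(A):
    """Permanent of a square matrix via Ryser's formula:
    perm(A) = sum over non-empty column subsets S of
              (-1)^(n-|S|) * prod_i (sum_{j in S} A[i][j])."""
    n = len(A)
    total = 0
    for mask in range(1, 1 << n):          # each mask encodes a subset S
        bits = bin(mask).count("1")        # |S|
        prod = 1
        for i in range(n):
            # Row sum restricted to the columns selected by mask.
            prod *= sum(A[i][j] for j in range(n) if mask >> j & 1)
        total += (-1) ** (n - bits) * prod
    return total
```

For [[1, 2], [3, 4]] this returns 1·4 + 2·3 = 10; for the n × n all-ones matrix it returns n!, which is a convenient sanity check.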
Pub Date: 2026-04-01 · Epub Date: 2025-12-27 · DOI: 10.1016/j.cpc.2025.110009
Simone Bnà , Giuseppe Giaquinto , Ettore Fadiga , Tommaso Zanelli , Francesco Bottau
High Performance Computing (HPC) on hybrid clusters represents a significant opportunity for Computational Fluid Dynamics (CFD), especially when modern accelerators are utilized effectively. However, despite the widespread adoption of GPUs, programmability remains a challenge, particularly in open-source contexts. In this paper, we present SPUMA, a full GPU port of OPENFOAM® targeting NVIDIA and AMD GPUs. The implementation strategy is based on a portable programming model and the adoption of a memory pool manager that leverages the unified memory feature of modern GPUs. This approach is discussed alongside several numerical tests conducted on two pre-exascale clusters in Europe, LUMI and Leonardo, which host AMD MI250X and NVIDIA A100 GPUs, respectively. In the performance analysis section, we present results on memory usage profiling and kernel wall-time, the impact of the memory pool, and energy consumption, obtained by simulating the well-known DrivAer industrial test case. GPU utilization strongly affects strong-scalability results, reaching 65% efficiency on both LUMI and Leonardo when approaching a load of 8 million cells per GPU. Weak-scalability results, obtained on 20 GPUs with the OpenFOAM native multigrid solver, range from 75% on Leonardo to 85% on LUMI. Notably, efficiency is no lower than 90% when switching to the NVIDIA AmgX linear algebra solver. Our tests also reveal that one A100 GPU on Leonardo is equivalent to 200–300 Intel Sapphire Rapids cores, provided the GPUs are sufficiently oversubscribed (more than 10 million cells per GPU). Finally, energy consumption is reduced by up to 82% compared to analogous simulations executed on CPUs.
Title: "SPUMA: A minimally invasive approach to the GPU porting of OPENFOAM®". Computer Physics Communications 321 (2026), Article 110009.
Pub Date: 2026-04-01 · Epub Date: 2026-01-18 · DOI: 10.1016/j.cpc.2026.110041
Saajid Chowdhury, Jesús Pérez-Ríos
We present a MATLAB script, atomiongpu.m, which uses GPU parallelization to run several million independent simulations per day of a trapped ion interacting with a low-density cloud of atoms, computing classical trajectories of the trapped ion and an atom that starts far away. The script uses ode45gpu, our optimized and specialized implementation of the Runge-Kutta algorithm behind MATLAB’s ODE solver ode45. We first discuss the physical system and show that ode45gpu can, on a CPU, solve it about 7× faster than MATLAB’s ode45, leading to a 600×–3500× speedup when running a million trajectories with ode45gpu in parallel on a GPU compared to ode45 on a CPU. Then, we show how to easily modify the inputs to atomiongpu.m to account for different kinds of atoms, ions, atom-ion interactions, trap potentials, simulation parameters, initial conditions, and computational hardware, so that atomiongpu.m automatically finds the probability of complex formation, the distributions of observables such as the scattering angle and complex lifetime, and plots of specific trajectories.
PROGRAM SUMMARY
Program Title: atomiongpu.m
CPC Library link to program files: https://doi.org/10.17632/sjw4hzw9jx.1
Nature of problem: Simulate classical dynamics (Newton’s laws) of an ion and atom, with up to several million different sets of initial conditions, store the final conditions and a few other scalar observables and their distributions, and plot specific trajectories.
Solution method: Implementing the algorithm behind ode45, MATLAB’s fourth/fifth-order adaptive-timestep Runge-Kutta method for propagating ordinary differential equations, we write a single, self-contained function, ode45gpu. We then use MATLAB’s arrayfun to parallelize it on multiple CPUs or GPUs. Finally, a wrapper script, atomiongpu.m, makes ode45gpu quick and convenient to use.
Additional comments: The source code for atomiongpu.m, ode45gpu, and figures can be found on the repository.
Developer’s repository link: https://github.com/saajidchowdhury/supplementGPU
Licensing provisions: CC0 1.0
Programming language: MATLAB R2023a, with Parallel Computing Toolbox installed
Supplementary material: https://github.com/saajidchowdhury/supplementGPU
Title: "GPU-parallelized MATLAB software for atom-ion dynamics". Computer Physics Communications 321 (2026), Article 110041.
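The accept/reject logic of an adaptive-timestep Runge-Kutta solver can be sketched compactly. The Python sketch below uses classical RK4 with step doubling as the error estimate, which is a simplification for exposition: MATLAB's ode45 (and hence ode45gpu) uses the embedded Dormand-Prince 4(5) pair instead, but the step-size control pattern is the same:

```python
def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate_adaptive(f, t0, y0, t_end, tol=1e-8, h=0.1):
    """Adaptive integration by step doubling: compare one full step
    against two half steps, accept if the difference is below tol,
    and shrink/grow h with the standard 4th-order controller."""
    t, y = t0, y0
    while t < t_end:
        h = min(h, t_end - t)
        y_big = rk4_step(f, t, y, h)
        y_half = rk4_step(f, t + h / 2, rk4_step(f, t, y, h / 2), h / 2)
        err = abs(y_half - y_big)
        if err < tol:            # accept the (more accurate) two-half-step result
            t += h
            y = y_half
        # Step-size update: exponent 1/5 for a 4th-order error estimate.
        h *= min(2.0, max(0.2, 0.9 * (tol / (err + 1e-16)) ** 0.2))
    return y
```

For example, integrate_adaptive(lambda t, y: -y, 0.0, 1.0, 1.0) returns e^(-1) ≈ 0.3679 to well within the tolerance.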
Pub Date: 2026-04-01 · Epub Date: 2026-01-13 · DOI: 10.1016/j.cpc.2026.110028
José Alfonso Pinzón Escobar , Markus Mühlhäußer , Hans-Joachim Bungartz , Philipp Neumann
In this work, algorithms for the parallel computation of three-body interactions in molecular dynamics are developed. While traversals for the computation of pair interactions are readily available in the literature, here, such traversals are extended to allow for the computation of interactions between molecules stored across three cells. A general framework for the computation of three-body interactions in linked cells is described and then used to implement the corresponding traversals. In addition, our analysis incorporates the commonly used cutoff conditions, because they influence the total workload of the interaction computation. The combinations of traversals and truncation conditions are validated using the well-known Lennard-Jones fluid. Validation case studies are taken from the literature and configured into homogeneous and inhomogeneous scenarios. Finally, strong scalability and performance in terms of molecule updates are measured at the node level.
Linked cell traversal algorithms for three-Body interactions in molecular dynamics
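The core idea of a linked-cell triplet traversal can be sketched in a few lines. The following is a serial 2D Python illustration, not the authors' parallel traversals: particles are binned into cells of edge length equal to the cutoff, each triplet is generated exactly once from its smallest-index member, and an all-pairs-within-cutoff truncation condition is used (one of several possible cutoff conditions). A brute-force enumeration verifies that the cell-based traversal finds the same triplets.

```python
import itertools
import math
import random

# Bin 2D points into cells of edge length rc (the cutoff).
def build_cells(pts, rc):
    cells = {}
    for idx, (x, y) in enumerate(pts):
        cells.setdefault((int(x // rc), int(y // rc)), []).append(idx)
    return cells

# Enumerate triplets (i < j < k) with all three pairwise distances < rc,
# visiting only the 3x3 cell neighbourhood of each particle.
def triplets_linked_cells(pts, rc):
    cells = build_cells(pts, rc)
    found = set()
    for (cx, cy), members in cells.items():
        # candidates live in this cell or one of its 8 neighbours
        cand = []
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                cand.extend(cells.get((cx + dx, cy + dy), []))
        for i in members:
            # generate each triplet once, from its smallest index i
            near = sorted(j for j in cand
                          if j > i and math.dist(pts[i], pts[j]) < rc)
            for j, k in itertools.combinations(near, 2):
                if math.dist(pts[j], pts[k]) < rc:
                    found.add((i, j, k))
    return found

# O(N^3) reference used to validate the traversal.
def triplets_brute_force(pts, rc):
    return {(i, j, k)
            for i, j, k in itertools.combinations(range(len(pts)), 3)
            if math.dist(pts[i], pts[j]) < rc
            and math.dist(pts[i], pts[k]) < rc
            and math.dist(pts[j], pts[k]) < rc}

random.seed(0)
pts = [(random.random(), random.random()) for _ in range(200)]
assert triplets_linked_cells(pts, 0.2) == triplets_brute_force(pts, 0.2)
```

Because the two outer particles of a qualifying triplet always lie within the cutoff of the smallest-index member, restricting the search to that member's 3x3 (in 3D, 3x3x3) neighbourhood is sufficient; this is the property that makes linked-cell triplet traversals scale linearly in the number of particles.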
Pub Date : 2026-04-01Epub Date: 2026-01-08DOI: 10.1016/j.cpc.2026.110020
Ekaterina A. Tarasevich , Maxim G. Gladush
Theoretical methods based on the density matrix provide powerful tools for describing open quantum systems. However, such methods are often too intricate to apply analytically. Here we present an object-oriented framework for constructing the equation of motion of the correlation matrix at a given order within the quantum BBGKY hierarchy, which is widely used to describe interacting many-particle systems. The algorithm for machine derivation of the equations implements the principles of quantum mechanics and operator algebra, and is based on the description and use of classes in the Python programming environment. Class objects correspond to the elements of the derived equations: the density matrix, the correlation matrix, energy operators, the commutator, and several systems for indexing operators. The program contains a special class that allows one to define a statistical ensemble with an infinite number of subsystems. For all classes, methods implementing the operations of the operator algebra are specified. The number of subsystems of the statistical ensemble for the physical problem, and the types of subsystems between which pairwise interactions are possible, are specified as input parameters. It is shown that this framework allows one to derive the equations of motion of the fourth-order correlation matrix in less than one minute.
Program summary
Program title: Program for symbolic generation of kinetic equations in quantum Bogolyubov hierarchies (BBGKY).
CPC Library link to program files: https://doi.org/10.17632/f97bwbypfd.1
Licensing provisions: GNU General Public License 3
Programming language: Python 3.10
Nature of problem: Construction of Bogolyubov hierarchies for reduced many-particle density matrices and correlation matrices is a powerful tool for solving problems in physics. However, the analytical derivation of the equations requires considerable time and effort to avoid errors. Applying Bogolyubov hierarchies to problems in quantum optics is a novel approach and requires special attention.
Solution method: To solve this problem, we use object-oriented programming. Each quantum-mechanical object (operator, density matrix, correlation matrix, etc.) is assigned to a class with specified attributes and methods, which represent the operations of quantum-mechanical algebra. This allows all the necessary operations to be performed on a computer, significantly reducing the time needed to obtain error-free output.
Object-oriented programming as a tool for constructing high-order quantum-kinetic BBGKY equations
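The class-based operator algebra described above can be illustrated with a toy symbolic layer. The sketch below is not the authors' package; it merely shows how noncommutative products and commutators can be represented so that an expression such as [A, B] = AB − BA is expanded term by term and like terms are collected automatically, which is the mechanism that makes machine derivation of kinetic equations error-free.

```python
from collections import defaultdict

# Toy noncommutative operator algebra: an expression is a dict mapping a
# "word" (tuple of operator names, in product order) to its coefficient.
# Illustrative sketch only, not the package described in the paper.
def op(name):
    return {(name,): 1}

def add(e1, e2, sign=1):
    """Sum (or difference) of two expressions, collecting like terms."""
    out = defaultdict(int)
    for w, c in e1.items():
        out[w] += c
    for w, c in e2.items():
        out[w] += sign * c
    return {w: c for w, c in out.items() if c != 0}

def mul(e1, e2):
    """Operator product: words concatenate, coefficients multiply."""
    out = defaultdict(int)
    for w1, c1 in e1.items():
        for w2, c2 in e2.items():
            out[w1 + w2] += c1 * c2
    return {w: c for w, c in out.items() if c != 0}

def commutator(e1, e2):
    """[A, B] = AB - BA, expanded term by term."""
    return add(mul(e1, e2), mul(e2, e1), sign=-1)

rho, H = op("rho"), op("H")
print(commutator(rho, H))   # {('rho', 'H'): 1, ('H', 'rho'): -1}
print(commutator(rho, rho)) # {} -- an operator commutes with itself
```

A production version would attach such methods to classes for density matrices, correlation matrices, and indexed subsystem operators, but the collection of like terms with exact integer coefficients shown here is already the step that eliminates the sign and bookkeeping errors of manual derivations.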
Pub Date : 2026-04-01Epub Date: 2026-01-07DOI: 10.1016/j.cpc.2026.110022
M. Woo, G. Jo, B.H. Park, A.Y. Aydemir, J.-H Kim
This paper presents a novel method for calculating the first, second, and third derivatives of the equilibrium poloidal flux in different directions in tokamaks. The method is implemented in a new code called Equilibrium Derivative in Arbitrary Mesh (EDAM), which is designed for practical fusion applications. A spectral method is adopted along the boundary, with evenly spaced angles, while unstructured triangular meshes are used inside the computational domain. A new boundary integral equation (BIE) is derived and solved numerically to obtain the first and higher-order derivatives at the boundary. Using the Grad-Shafranov (GS) equation, linear partial differential equations for the first and higher-order flux derivatives are then constructed and solved. Validation is performed using an analytical equilibrium constructed by Cicogna, which describes D-shaped plasmas with steep profiles near the boundary. The code demonstrates similar convergence rates for the first and higher-order derivatives, achieving second-order accuracy. The new method has significant potential for practical fusion simulations, providing derivatives up to third order with the required accuracy at any nodal point of the unstructured mesh.
Accurate calculation of the gradients of the equilibrium poloidal flux in tokamaks
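The spectral treatment of boundary data at evenly spaced angles rests on Fourier differentiation, which the following NumPy sketch illustrates (a generic illustration, not EDAM code): a periodic function sampled at N equispaced angles is differentiated by multiplying its FFT by ik, which is exact to machine precision for band-limited data.

```python
import numpy as np

# Spectral (Fourier) differentiation of a periodic function sampled at N
# evenly spaced angles on [0, 2*pi) -- generic illustration, not EDAM code.
def spectral_derivative(samples):
    n = len(samples)
    k = np.fft.fftfreq(n, d=1.0 / n)   # integer wavenumbers 0..n/2-1, -n/2..-1
    return np.real(np.fft.ifft(1j * k * np.fft.fft(samples)))

theta = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
df = spectral_derivative(np.sin(3.0 * theta))
# For this band-limited sample the result matches the exact derivative
# 3*cos(3*theta) to machine precision.
err = np.max(np.abs(df - 3.0 * np.cos(3.0 * theta)))
```

This spectral accuracy on the boundary is what allows high-order flux derivatives to be prescribed there and then propagated into the interior mesh by solving the associated linear partial differential equations.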
Pub Date : 2026-04-01Epub Date: 2026-01-09DOI: 10.1016/j.cpc.2025.110014
David A. Bonhommeau
<div><div>DynHeMat is a parallel program for modeling the dynamics of pure and doped helium nanodroplets (HNDs) by means of zero-point averaged dynamics (ZPAD), a method in which the quantum nature of helium atoms is taken into account through a He-He pseudopotential that includes the zero-point effects of helium clusters in an averaged manner. Three He-He pseudopotentials, defined for applications in different contexts, are implemented. Large HNDs can be formed by successive coalescences of smaller ones, bearing in mind that, depending on the HND size and the He-He pseudopotential in use, the liquid character of the HND is more or less pronounced. Files containing the positions and velocities of HNDs formed with the three aforementioned He-He pseudopotentials are collected in a local databank called ZPAD_DB. ZPAD simulations can be carried out at constant energy or temperature, enabling the user to investigate collision, coagulation, or submersion processes in pure or doped HNDs. Impurities can be rare-gas atoms (Ne, Ar, Kr, Xe, and Rn), alkali atoms (Li, Na, K, Rb, Cs), or homogeneous clusters composed of such atoms. The program provides information on trajectories, namely positions, velocities, energies, radial distribution functions, and the initial distribution of HND surface atoms. Extension to other impurities or He-He pseudopotentials is made possible by the current structure of the program and its keyword system.</div><div>PROGRAM SUMMARY</div><div><em>Program title</em>: DynHeMat</div><div><em>CPC Library link to program files</em>: <span><span>https://doi.org/10.17632/3hrfykstvr.1</span><svg><path></path></svg></span></div><div><em>Licensing provisions</em>: GNU General Public License 3 (GPL)</div><div><em>Programming language</em>: Fortran 90</div><div><em>Nature of problem</em>:</div><div>Helium nanodroplets (HNDs) are large quantum systems containing from a few thousand to a billion atoms.
Fully quantum approaches, such as quantum Monte Carlo, time-dependent density functional theory, and path-integral molecular dynamics, are often limited to a few hundred or a few thousand atoms, or to small statistics in terms of, for instance, projectile velocities or impact parameters. By contrast, a classical approach enables simulations of larger systems, provided that the quantum nature of helium atoms is included in the calculation in an averaged manner so that the expected heliophilic or heliophobic nature of impurities is maintained.</div><div><em>Solution method</em>:</div><div>The zero-point averaged dynamics (ZPAD) method includes zero-point effects in classical simulations in an averaged manner through an effective He-He potential, and possibly effective He-impurity potentials, which makes the HND liquid and drastically improves the agreement with quantum calculations compared to standard classical simulations. Initially used to tackle the fragmentation of rare-gas clusters embedded in HNDs ionized by electron impact, the method was later applied to the coagulation of rare-gas atoms in liquid helium and to collisions of alkali atoms with small HNDs. In the current version of the program, impurities can be rare-gas atoms (from Ne to Rn) or alkali atoms (from Li to Cs). The program is parallelized with MPI and can run several independent trajectories simultaneously, each distributed over a set of CPU cores, which allows simulations of large HNDs (N ≈ 10<sup>5</sup>) within a reasonable time.</div><div><em>Additional comments including restrictions and unusual features</em>:</div><div>The program has been tested on Linux RedHat 9 and MacOS Ventura under x86 and ARM architectures, and the number of trajectories run in parallel should not exceed 999. Unlike TDDFT, ZPAD does not account for superfluidity, so quantum vortices cannot be studied with this approach. Moreover, producing pure HNDs before running ZPAD simulations of collision, coagulation, or submersion processes may require a substantial amount of time; to compensate for this drawback, DynHeMat users have access to a databank of XYZ files (positions and velocities) for He<sub>N</sub> droplets with 1000 ≤ N &lt; 90000.</div></div>
DynHeMat: A program for zero-point averaged dynamics of pure and doped helium nanodroplets
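The classical propagation underlying ZPAD is ordinary molecular dynamics with an effective pair potential. The Python sketch below uses a generic Lennard-Jones pair potential as a stand-in for the He-He pseudopotentials actually implemented in DynHeMat (their functional forms differ) and integrates with velocity Verlet, checking that the symplectic integrator conserves the total energy.

```python
import math

# Pairwise forces and potential energy for a generic Lennard-Jones system.
# This is a stand-in for the He-He pseudopotentials of DynHeMat, whose
# actual forms differ; only the MD machinery is illustrated here.
def forces_and_energy(pos, eps=1.0, sig=1.0):
    n = len(pos)
    F = [[0.0, 0.0, 0.0] for _ in range(n)]
    U = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            d = [pos[i][a] - pos[j][a] for a in range(3)]
            r2 = sum(c * c for c in d)
            sr6 = (sig * sig / r2) ** 3
            U += 4.0 * eps * (sr6 * sr6 - sr6)
            fmag = 24.0 * eps * (2.0 * sr6 * sr6 - sr6) / r2  # (-dU/dr)/r
            for a in range(3):
                F[i][a] += fmag * d[a]
                F[j][a] -= fmag * d[a]
    return F, U

# Constant-energy (NVE) propagation with the velocity Verlet scheme.
def velocity_verlet(pos, vel, dt, steps, mass=1.0):
    F, U = forces_and_energy(pos)
    for _ in range(steps):
        for i in range(len(pos)):
            for a in range(3):
                vel[i][a] += 0.5 * dt * F[i][a] / mass  # half kick
                pos[i][a] += dt * vel[i][a]             # drift
        F, U = forces_and_energy(pos)
        for i in range(len(pos)):
            for a in range(3):
                vel[i][a] += 0.5 * dt * F[i][a] / mass  # second half kick
    ke = 0.5 * mass * sum(v * v for vv in vel for v in vv)
    return pos, vel, ke + U

# Two atoms released from rest slightly off the potential minimum oscillate;
# the total energy should stay nearly constant over many periods.
pos = [[0.0, 0.0, 0.0], [1.2, 0.0, 0.0]]
vel = [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]
_, e0 = forces_and_energy(pos)           # initial energy (all potential)
pos, vel, e_final = velocity_verlet(pos, vel, dt=1e-3, steps=2000)
drift = abs(e_final - e0)
```

In a ZPAD-style code the same loop runs over many thousands of helium atoms with the effective pseudopotential, and a thermostat is added for constant-temperature runs; MPI parallelism then distributes independent trajectories (and the force loop of each trajectory) over CPU cores.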