Pub Date: 2025-10-28 | DOI: 10.1016/j.cpc.2025.109914
Adalberto Perez , Saleh Rezaeiravesh , Yi Ju , Erwin Laure , Stefano Markidis , Philipp Schlatter
Turbulence data sets produced from computational fluid dynamics (CFD), especially from finely resolved direct numerical simulations (DNS) and large eddy simulations (LES) of turbulent flows, tend to be very large due to the high resolutions adopted to accurately resolve the smallest scales. While the computational capacity of high-performance computing (HPC) platforms has kept increasing, storage capacity has lagged to the point that more data is produced than can be efficiently managed. Among the several methods that have emerged to deal with this problem, an efficient technique is data compression. In this study, we present a proof of concept of a novel data compression approach that relies on Gaussian process regression (GPR) within a Bayesian framework to handle data sets in such a way that initially discarded information can be recovered a posteriori. The approach can be used to supplement existing compression algorithms with measures of uncertainty, and we show that it can be applied to compress not only the 3D spatial fields of turbulence but also discrete sets of time series data. The compression algorithm has been designed for data from spectral element method (SEM) simulations but can be extended to spatiotemporal fields obtained from other methods arising in engineering and physics. Our investigation shows that it is possible to use Gaussian process regression for data compression, but it also highlights several limitations: efficient implementations of GPR are crucial for its adoption, and while the method is unlikely to compete in throughput with state-of-the-art methods, given the cost of GPR, there is potential in terms of compression performance, provided efficient bit-plane coding is integrated.
Title: Compression of turbulence time series data using Gaussian process regression (Computer Physics Communications, vol. 319, Article 109914)
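As a library-free sketch of the idea described above (not the authors' implementation), the following compresses a 1D time series by retaining only every eighth sample plus fixed GP hyperparameters, then reconstructs the signal from the GP posterior mean and reports a per-point variance as the uncertainty measure. The RBF kernel, its length scale, and the subsampling stride are illustrative assumptions.

```python
import numpy as np

def rbf(xa, xb, ell=0.5, sig=1.0):
    """Squared-exponential kernel matrix between two 1D sample sets (assumed kernel)."""
    d = xa[:, None] - xb[None, :]
    return sig**2 * np.exp(-0.5 * (d / ell) ** 2)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 4.0, 200)                 # original "stored" time axis
y = np.sin(2.0 * t) + 0.02 * rng.standard_normal(t.size)

# --- compression: retain every 8th sample as the compressed payload ---
t_c, y_c = t[::8], y[::8]

# --- decompression: GP posterior mean and variance on the full axis ---
noise = 1e-3                                   # jitter / assumed noise level
K = rbf(t_c, t_c) + noise * np.eye(t_c.size)
Ks = rbf(t, t_c)
y_rec = Ks @ np.linalg.solve(K, y_c)           # posterior mean = reconstruction
var = np.clip(1.0 - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T)), 0.0, None)

ratio = t.size / t_c.size                      # nominal compression ratio
rmse = float(np.sqrt(np.mean((y_rec - y) ** 2)))
```

The variance `var` is what would let initially discarded information be assessed a posteriori: large values flag regions where the stored subset under-resolves the signal.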
Pub Date: 2025-10-28 | DOI: 10.1016/j.cpc.2025.109911
Vahid Azimi-Mousolou , Davoud Mirzaei
The classical Landau-Lifshitz-Gilbert (LLG) equation has long served as a cornerstone for modeling magnetization dynamics in magnetic systems, yet its classical nature limits its applicability to inherently quantum phenomena such as entanglement and nonlocal correlations. Inspired by the need to incorporate quantum effects into spin dynamics, a quantum generalization of the LLG equation was recently proposed [Phys. Rev. Lett. 133, 266704 (2024)] which captures essential quantum behavior in many-body systems. In this work, we develop a robust numerical methodology tailored to this quantum LLG framework that not only handles the complexity of quantum many-body systems but also preserves the intrinsic mathematical structures and physical properties dictated by the equation. We apply the proposed method to a class of quantum systems with a moderate number of spins that host topological states of matter, and demonstrate rich quantum behavior, including the emergence of long-time entangled states. This approach opens a pathway toward reliable simulations of quantum magnetism beyond classical approximations, potentially leading to new discoveries.
Title: Numerical solution of quantum Landau-Lifshitz-Gilbert equation (Computer Physics Communications, vol. 319, Article 109911)
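To make the structure-preservation point concrete, here is a minimal sketch for the *classical* LLG equation that the paper generalizes (the quantum version evolves density matrices instead): a midpoint step for precession plus Gilbert damping, followed by renormalization so the spin magnitude is preserved exactly. The damping constant and field are illustrative assumptions.

```python
import numpy as np

def llg_rhs(m, B, alpha=0.1):
    """Classical LLG right-hand side: precession about B plus Gilbert damping."""
    return -np.cross(m, B) - alpha * np.cross(m, np.cross(m, B))

def step(m, B, dt):
    """Explicit midpoint step, then renormalize so |m| = 1 is preserved exactly."""
    k1 = llg_rhs(m, B)
    k2 = llg_rhs(m + 0.5 * dt * k1, B)
    m_new = m + dt * k2
    return m_new / np.linalg.norm(m_new)

B = np.array([0.0, 0.0, 1.0])   # static field along z
m = np.array([1.0, 0.0, 0.0])   # initial spin along x
for _ in range(2000):
    m = step(m, B, dt=0.01)
# damping relaxes the spin toward the field direction while |m| stays 1
```

The renormalization mimics, in the simplest possible setting, the kind of intrinsic-structure preservation (here the spin-length constraint) that the paper's methodology enforces for the quantum equation.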
Pub Date: 2025-10-27 | DOI: 10.1016/j.cpc.2025.109916
Jan Habscheid, Satyvir Singh, Lambert Theisen, Stefanie Braun, Manuel Torrilhon
In this study, we present a finite element solver for a thermodynamically consistent electrolyte model that accurately captures multicomponent ionic transport by incorporating key physical phenomena such as steric effects, solvation, and pressure coupling. The model is rooted in the principles of non-equilibrium thermodynamics and strictly enforces mass conservation, charge neutrality, and entropy production. It extends beyond classical frameworks like the Nernst–Planck system by employing modified partial mass balances, the electrostatic Poisson equation, and a momentum balance expressed in terms of electrostatic potential, atomic fractions, and pressure, thereby enhancing numerical stability and physical consistency. Implemented using the FEniCSx platform, the solver efficiently handles one- and two-dimensional problems with varied boundary conditions and demonstrates excellent convergence behavior and robustness. Validation against benchmark problems confirms its improved physical fidelity, particularly in regimes characterized by high ionic concentrations and strong electrochemical gradients. Simulation results reveal critical electrolyte phenomena, including electric double layer formation, rectification behavior, and the effects of solvation number, Debye length, and compressibility. The solver’s modular variational formulation facilitates its extension to complex electrochemical systems involving multiple ionic species with asymmetric valences. We publicly provide the documented and validated solver framework.
Title: A finite element solver for a thermodynamically consistent electrolyte model (Computer Physics Communications, vol. 319, Article 109916)
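The electric double layer mentioned above can be illustrated with a far simpler model than the paper's full system: the linearized Poisson-Boltzmann equation phi'' = phi / lambda^2, whose solution decays over the Debye length. The finite-difference sketch below (an assumption-laden stand-in, not the FEniCSx solver) recovers the screened profile against the analytic solution.

```python
import numpy as np

lam = 0.05           # Debye length (arbitrary units, illustrative)
L, n = 1.0, 400      # domain length and number of interior grid points
h = L / (n + 1)
x = np.linspace(h, L - h, n)

# tridiagonal system for phi'' = phi / lam^2 with phi(0) = 1, phi(L) = 0
main = -2.0 / h**2 - 1.0 / lam**2
off = 1.0 / h**2
A = (np.diag(np.full(n, main))
     + np.diag(np.full(n - 1, off), 1)
     + np.diag(np.full(n - 1, off), -1))
b = np.zeros(n)
b[0] -= 1.0 / h**2   # Dirichlet value phi(0) = 1 folded into the right-hand side

phi = np.linalg.solve(A, b)
exact = np.sinh((L - x) / lam) / np.sinh(L / lam)   # analytic screened profile
err = float(np.max(np.abs(phi - exact)))
```

The exponential decay of `phi` over a few multiples of `lam` is the double-layer structure; the paper's model adds steric effects, solvation, and pressure coupling on top of this basic electrostatics.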
Pub Date: 2025-10-26 | DOI: 10.1016/j.cpc.2025.109900
B. Thorpe , M.J. Smith , P.J. Hasnip , N.D. Drummond
We describe how quantum Monte Carlo calculations using the CASINO software can be accelerated using graphics processing units (GPUs) and OpenACC. In particular we consider offloading Ewald summation, the evaluation of long-range two-body terms in the Jastrow correlation factor, and the evaluation of orbitals in a blip basis set. We present results for three- and two-dimensional homogeneous electron gases and ab initio simulations of bulk materials, showing that significant speedups of up to a factor of 2.5 can be achieved by the use of GPUs when several hundred particles are included in the simulations. The use of single-precision arithmetic can improve the speedup further without significant detriment to the accuracy of the calculations.
Title: Acceleration of the CASINO quantum Monte Carlo software using graphics processing units and OpenACC (Computer Physics Communications, vol. 319, Article 109900)
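The real-space part of Ewald summation, one of the kernels the paper offloads, is an erfc-screened pairwise Coulomb sum; it is exactly the kind of O(N^2) loop that maps well onto a GPU. The plain-Python sketch below (illustrative parameters; no k-space part, no GPU) also probes the paper's single- vs double-precision remark by running the same sum in float32 and float64.

```python
import math
import numpy as np

def realspace_ewald(pos, q, box, kappa, dtype=np.float64):
    """Minimum-image, erfc-screened real-space Ewald energy (k-space part omitted)."""
    pos = pos.astype(dtype)
    q = q.astype(dtype)
    n = len(q)
    e = dtype(0.0)
    for i in range(n):
        for j in range(i + 1, n):
            d = pos[i] - pos[j]
            d -= box * np.round(d / box)   # minimum-image convention
            r = float(np.sqrt(np.dot(d, d)))
            e += q[i] * q[j] * math.erfc(kappa * r) / r
    return float(e)

rng = np.random.default_rng(1)
n, box = 32, 5.0
pos = rng.uniform(0.0, box, size=(n, 3))
q = np.where(np.arange(n) % 2 == 0, 1.0, -1.0)   # charge-neutral system

e64 = realspace_ewald(pos, q, box, kappa=1.0)
e32 = realspace_ewald(pos, q, box, kappa=1.0, dtype=np.float32)
abs_diff = abs(e64 - e32)
```

The small `abs_diff` is consistent with the paper's observation that single precision can be used without significant loss of accuracy, while roughly halving memory traffic on the accelerator.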
Pub Date: 2025-10-26 | DOI: 10.1016/j.cpc.2025.109901
Wasim Niyaz Munshi , Marc Fehling , Wolfgang Bangerth , Chandrasekhar Annavarapu
Phase-field models for fracture have demonstrated significant power in simulating realistic fractures, including complex behaviors like crack branching, coalescence, and fragmentation. Despite this, these models have mostly remained in the realm of proof-of-concept studies rather than being applied to practical problems. This paper introduces a computationally efficient implementation of the phase-field method based on the open-source finite element library deal.II, incorporating parallel computing and adaptive mesh refinement. We provide a detailed outline of the steps required to implement the phase-field model in deal.II. We then validate our implementation through a benchmark 3D boundary value problem and finally demonstrate the computational capabilities by running field-scale problems involving complicated fracture patterns in 3D. This open-source code offers a framework that enables engineers and researchers to simulate diffuse crack growth within a widely-used computational environment.
Title: A detailed guide to an open-source implementation of the hybrid phase field method for 3D fracture modeling in deal.II (Computer Physics Communications, vol. 319, Article 109901)
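A worked check of the diffuse-crack idea behind such models (using the common AT2 variant as an assumed stand-in for the paper's hybrid formulation): the optimal 1D crack profile is d(x) = exp(-|x|/l), and the regularized crack surface energy Gamma_l = (1/(2l)) * integral of (d^2 + l^2 d'^2) dx evaluates to 1 per unit crack area, independent of the length scale l. The snippet verifies this numerically.

```python
import numpy as np

l = 0.01                          # regularization length (illustrative)
x = np.linspace(-0.5, 0.5, 20001)
d = np.exp(-np.abs(x) / l)        # optimal 1D AT2 crack profile

# Gamma_l = (1/(2l)) * integral( d^2 + l^2 * d'^2 ) dx, expected to be ~1
dd = np.gradient(d, x)            # central differences (inexact at the kink x = 0)
f = (d**2 + l**2 * dd**2) / (2.0 * l)
gamma = float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))   # trapezoid rule
```

This unit normalization is what lets the phase-field energy converge to the sharp-crack Griffith energy as l shrinks, which is why refining the mesh near the crack (the adaptive refinement in the paper) matters.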
Pub Date: 2025-10-25 | DOI: 10.1016/j.cpc.2025.109908
Tianya Xia, Li Lin Yang
We propose a novel method for reconstructing the Laurent expansion of rational functions using p-adic numbers. By evaluating the rational functions in p-adic fields rather than finite fields, it is possible to probe several expansion coefficients simultaneously, enabling their reconstruction from a single set of evaluations. Compared with the reconstruction of the full expression, constructing the Laurent expansion to the first few orders significantly reduces the required computational resources. Our method can handle expansions with respect to more than one variable simultaneously. Among possible applications, we anticipate that our method can be used to simplify the integration-by-parts reduction of Feynman integrals in cutting-edge calculations.
PROGRAM SUMMARY
Manuscript Title: Reconstructing Laurent expansion of rational functions using p-adic numbers
Authors: Tianya Xia, Li Lin Yang
Program Title: LaurentExpPadicReconstruct
CPC Library link to program files: (to be added by Technical Editor)
Licensing provisions: GPLv3
Programming language: C++
External routines/libraries: FireFly, FLINT
Nature of problem: Reconstructing Laurent expansions of rational functions arising in the IBP reduction of Feynman integrals.
Solution method: Uses p-adic numbers combined with rational function reconstruction over finite fields.
Running time: Typically ranges from several minutes to a few hours, depending on the size and algebraic complexity of the input.
Title: Reconstructing Laurent expansion of rational functions using p-adic numbers (Computer Physics Communications, vol. 319, Article 109908)
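A toy version of the core trick, stripped of the FireFly/FLINT machinery: evaluating a function at a p-adic point packs several series coefficients into one number, and the base-p digits of a single modular evaluation read them back off. Here f(x) = 1/(1 - 2x) has Taylor coefficients 2^n, and one evaluation at x = p modulo p^4 recovers the first four of them (this works as long as each coefficient is a non-negative integer smaller than p; the paper's method handles the general rational case).

```python
p, k = 101, 4                   # prime and number of coefficients to probe
mod = p**k

# f(x) = 1/(1 - 2x); one evaluation at x = p, truncated to k p-adic digits
val = pow(1 - 2 * p, -1, mod)   # modular inverse = f(p) as a truncated p-adic number

coeffs = []
for _ in range(k):
    coeffs.append(val % p)      # lowest base-p digit = next Taylor coefficient
    val //= p
# coeffs == [1, 2, 4, 8], i.e. the 2^n series coefficients from a single evaluation
```

Over a finite field alone, the same evaluation would collapse all coefficients into one residue; working modulo p^k is what keeps the digits separable.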
Pub Date: 2025-10-25 | DOI: 10.1016/j.cpc.2025.109915
Haocheng Wen , Faxuan Luo , Sheng Xu, Bing Wang
The compressible reacting flow numerical solver is an essential tool in the study of combustion and energy disciplines, as well as in the design of industrial power and propulsion devices. We have established the first JAX-based (JAX is a Python library developed by Google for accelerator-oriented array computation and high-performance numerical computing) block-structured adaptive mesh refinement (AMR) framework, called JAX-AMR, and then developed a fully differentiable solver for compressible reacting flows, named JANC. JANC is implemented in Python and features automatic differentiation capabilities, enabling efficient integration of the solver with machine learning. Furthermore, benefiting from multiple acceleration features such as accelerated linear algebra (XLA)-powered just-in-time (JIT) compilation, GPU/TPU computing, parallel computing, and AMR, the computational efficiency of JANC has been significantly improved. In a comparative test of a two-dimensional detonation tube case, the computational cost of the JANC core solver, running on a single A100 GPU, was reduced to 1% of that of OpenFOAM parallelized across 384 CPU cores. When the AMR method is enabled for both solvers, JANC's computational cost can be reduced to 1-2% of that of OpenFOAM. The core solver of JANC has also been tested for parallel computation on a setup with four A100 cards, demonstrating its convenient and efficient parallel computing capability. JANC also shows strong compatibility with machine learning, combining adjoint optimization to make the whole dynamic trajectory efficiently differentiable. JANC provides a new generation of high-performance, cost-effective, and high-precision solver framework for large-scale numerical simulations of compressible reacting flows and related machine learning research.

Program summary
Program title: JAX-AMR and JANC
CPC Library link to program files: https://doi.org/10.17632/pkbxp5tm8w.1
Developer's repository link: https://github.com/JA4S/JAX-AMR, https://github.com/JA4S/JANC
Licensing provisions: MIT
Programming language: Python
Nature of problem: The numerical solution of compressible reactive flows plays a crucial role in combustion, energy utilization, and the design and manufacturing of propulsion systems. However, the multi-species nature, highly transient behavior, and strong numerical stiffness of reactive flows lead to significantly higher computational costs compared to conventional flow problems. In addition, conventional reactive flow solvers are typically built on Fortran or C++ frameworks, making them difficult to integrate with data-driven methods based on existing Python ecosystems, particularly gradient-based optimization techniques such as machine learning.
Title: JANC: A cost-effective, differentiable compressible reacting flow solver featured with JAX-based adaptive mesh refinement (Computer Physics Communications, vol. 319, Article 109915)
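JANC obtains its differentiability from JAX's automatic differentiation; as a dependency-free sketch of that principle (not JANC's implementation), a tiny forward-mode dual number below carries a derivative through a few explicit Euler steps of a toy reaction ODE du/dt = -k*u, yielding the sensitivity of the final state with respect to the rate parameter k. All names here are illustrative.

```python
from dataclasses import dataclass

@dataclass
class Dual:
    """Forward-mode dual number: a value and its derivative w.r.t. one parameter."""
    val: float
    dot: float
    def __add__(self, o): return Dual(self.val + o.val, self.dot + o.dot)
    def __sub__(self, o): return Dual(self.val - o.val, self.dot - o.dot)
    def __mul__(self, o): return Dual(self.val * o.val,
                                      self.val * o.dot + self.dot * o.val)

def scale(c, x):
    """Multiply a Dual by a plain float constant."""
    return Dual(c * x.val, c * x.dot)

def integrate(k):
    """Explicit Euler for du/dt = -k*u, u(0) = 1, carrying d/dk through every step."""
    u = Dual(1.0, 0.0)
    dt, steps = 0.01, 100
    for _ in range(steps):
        u = u - scale(dt, k * u)
    return u

k = Dual(2.0, 1.0)     # seed dot = 1: differentiate with respect to k
out = integrate(k)
# analytic reference at t = 1: u = exp(-k) and du/dk = -exp(-k), about -0.135 at k = 2
```

JAX generalizes exactly this propagation to whole PDE solvers (and supplies the reverse/adjoint mode the paper uses for trajectory-level gradients), which is what makes a solver written in it trainable end to end.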
Pub Date: 2025-10-25 | DOI: 10.1016/j.cpc.2025.109906
Quoc Viet Tran , Tan Phuc Le , Vu Dong Tran , Ngoc Anh Nguyen , Quang Hung Nguyen
In this work, we present the "EP code" (version 1.0), a user-friendly and robust computational tool. It computes the exact pairing eigenvalues and eigenvectors directly from the general nuclear pairing Hamiltonian, represented using SU(2) quasi-spin algebra with basis vectors in binary representation, at zero temperature for both odd and even deformed nucleon systems. In this initial release, the sparsity and symmetry of the pairing matrix are exploited for the first time to quickly construct the pairing matrix. The ARPACK and LAPACK packages are employed for the diagonalization of large- and small-scale sparse matrices, respectively. In addition, the calculation speed for odd-nucleon systems is significantly improved by employing a novel technique to accurately identify the block containing the ground state in such odd-nucleon configurations. To ensure high numerical stability, the Kahan compensation algorithm is employed. The current version of the EP code can efficiently expand the computational space to handle up to 26 doubly folded (deformed) single-particle levels and 26 nucleons on a standard desktop computer in approximately 10^2 s with double precision. With sufficient computational resources, the code can process up to 63 deformed single-particle levels, which can accommodate from 1 to 63 nucleon pairs. The EP v1.0 code is also designed for future extensions, including finite-temperature and parallel computations.

PROGRAM SUMMARY
Program Title: EP (v1.0)
CPC Library link to program files: https://doi.org/10.17632/z3jzzmc9cw.1
Developer's repository link: https://github.com/ifas-mathphys/epcode_v01
Code Ocean capsule: (to be added by Technical Editor)
Licensing provisions: GPLv3
Programming language: Fortran
Nature of problem: Exact computation of eigenvalues and eigenvectors via direct diagonalization of the general nuclear pairing Hamiltonian for both odd- and even-nucleon configurations at zero temperature, ensuring high accuracy, enhanced numerical stability, and reduced computational time.
Solution method: Using the quasispin representation within the SU(2) algebra, we construct the pair-exchange matrix of the pairing Hamiltonian. To expedite the construction of the pairing matrix, we employ the binary representation to encode the information of paired states represented by matrix elements, while the reduction of computational time is achieved through the implementation of a sparse matrix diagonalization algorithm. An improved hash function, inspired by Ref. [1], is used to efficiently retrieve the indices of the pairing matrix, thereby speeding up its construction. A technique for determining the ground-state block in odd-nucleon configurations that significantly reduces the computational time is also implemented.
{"title":"Exact nuclear pairing solution for large-scale configurations: I. The EP (v1.0) program at zero temperature","authors":"Quoc Viet Tran , Tan Phuc Le , Vu Dong Tran , Ngoc Anh Nguyen , Quang Hung Nguyen","doi":"10.1016/j.cpc.2025.109906","DOIUrl":"10.1016/j.cpc.2025.109906","url":null,"abstract":"<div><div>In this work, we present the “EP code” (version 1.0), a user-friendly and robust computational tool. It computes the exact pairing eigenvalues and eigenvectors directly from the general nuclear pairing Hamiltonian, represented using SU(2) quasi-spin algebra with basis vectors in binary representation, at zero temperature for both odd and even deformed nucleon systems. In this initial release, the sparsity and symmetry of the pairing matrix are exploited for the first time to accelerate its construction. The ARPACK and LAPACK packages are employed for the diagonalization of large- and small-scale sparse matrices, respectively. In addition, the calculation speed for odd-nucleon systems is significantly improved by employing a novel technique to accurately identify the block containing the ground state in such odd-nucleon configurations. To ensure high numerical stability, the Kahan compensation algorithm is employed. The current version of the EP code can efficiently expand the computational space to handle up to 26 doubly folded (deformed) single-particle levels and 26 nucleons on a standard desktop computer in approximately 10<sup>2</sup> s with double precision. With sufficient computational resources, the code can process up to 63 deformed single-particle levels, which can accommodate from 1 to 63 nucleon pairs. The EP v1.0 code is also designed for future extensions, including finite-temperature and parallel computations. 
<strong>PROGRAM SUMMARY</strong> <em>Program Title:</em> EP (v1.0) <em>CPC Library link to program files:</em> (<span><span>https://doi.org/10.17632/z3jzzmc9cw.1</span><svg><path></path></svg></span>) <em>Developer’s repository link:</em> <span><span>https://github.com/ifas-mathphys/epcode_v01</span><svg><path></path></svg></span> <em>Code Ocean capsule:</em> (to be added by Technical Editor) <em>Licensing provisions:</em> GPLv3 <em>Programming language:</em> Fortran <em>Supplementary material:</em> <em>Nature of problem:</em> Exact computation of eigenvalues and eigenvectors via direct diagonalization of the general nuclear pairing Hamiltonian for both odd- and even-nucleon configurations at zero temperature, ensuring high accuracy, enhanced numerical stability, and reduced computational time. <em>Solution method:</em> Using the quasispin representation within the SU(2) algebra, we construct the pair-exchange matrix of the pairing Hamiltonian. To expedite the construction of the pairing matrix, we employ the binary representation to encode the information of paired states represented by matrix elements, while the reduction of computational time is achieved through the implementation of a sparse matrix diagonalization algorithm. An improved hash function, inspired by Ref. [1], is used to efficiently retrieve the indices of the pairing matrix, thereby speeding up its construction. 
A technique for determining the ground-state block in odd-nucleon configurations that significantly reduces the","PeriodicalId":285,"journal":{"name":"Computer Physics Communications","volume":"319 ","pages":"Article 109906"},"PeriodicalIF":3.4,"publicationDate":"2025-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145576713","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-10-24DOI: 10.1016/j.cpc.2025.109907
Kayoung Ban , Myeonghun Park , Raymundo Ramos
We develop a machine learning algorithm to turn around stratification in Monte Carlo sampling. We use a different way to divide the domain space of the integrand, based on the height of the function being sampled, similar to what is done in Lebesgue integration. This means that isocontours of the function define regions that can have any shape depending on the behavior of the function. We take advantage of the capacity of neural networks to learn complicated functions in order to predict these complicated divisions and preclassify large samples of the domain space. From this preclassification, we can select the required number of points to perform a number of tasks such as variance reduction, integration and even event selection. The network ultimately defines the regions with what it learned and is also used to calculate the multi-dimensional volume of each region. Reference code with examples is publicly available on the web<sup>1</sup>.
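The height-based stratification can be sketched without the neural network: classify sample points by which isocontour band of the integrand they fall in, estimate each band's volume from the classified fraction, then integrate stratum by stratum. Everything below (the Gaussian test integrand, the band edges, the sample counts) is invented for illustration; in LeStrat-Net the classifier is a trained network, not the integrand itself:

```python
import math
import random

def f(x, y):
    """Test integrand; its isocontours are circles."""
    return math.exp(-(x * x + y * y))

def band(v, edges):
    """Index of the height band that value v falls into."""
    for i, e in enumerate(edges):
        if v < e:
            return i
    return len(edges)

random.seed(0)
edges = [0.2, 0.5, 0.8]   # isocontour levels defining the strata
pts = [(random.uniform(-2, 2), random.uniform(-2, 2)) for _ in range(200_000)]
labels = [band(f(x, y), edges) for x, y in pts]

# Each stratum's multi-dimensional volume = classified fraction × box volume.
box_vol = 16.0
strata = {}
for p, l in zip(pts, labels):
    strata.setdefault(l, []).append(p)

# Stratified estimate: per-stratum mean of f times per-stratum volume,
# using only a few hundred evaluation points per stratum.
est = 0.0
for pts_b in strata.values():
    vol = box_vol * len(pts_b) / len(pts)
    sub = random.sample(pts_b, min(500, len(pts_b)))
    est += vol * sum(f(x, y) for x, y in sub) / len(sub)

print(est)  # close to the exact value pi * math.erf(2) ** 2 ≈ 3.11
```

Because f varies little within each height band, the per-stratum variance is small, which is the variance-reduction mechanism the abstract refers to.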
{"title":"LeStrat-Net: Lebesgue style stratification for Monte Carlo simulations powered by machine learning","authors":"Kayoung Ban , Myeonghun Park , Raymundo Ramos","doi":"10.1016/j.cpc.2025.109907","DOIUrl":"10.1016/j.cpc.2025.109907","url":null,"abstract":"<div><div>We develop a machine learning algorithm to turn around stratification in Monte Carlo sampling. We use a different way to divide the domain space of the integrand, based on the height of the function being sampled, similar to what is done in Lebesgue integration. This means that isocontours of the function define regions that can have any shape depending on the behavior of the function. We take advantage of the capacity of neural networks to learn complicated functions in order to predict these complicated divisions and preclassify large samples of the domain space. From this preclassification, we can select the required number of points to perform a number of tasks such as variance reduction, integration and even event selection. The network ultimately defines the regions with what it learned and is also used to calculate the multi-dimensional volume of each region. Reference code with examples is publicly available on the web<span><span><sup>1</sup></span></span>.</div></div>","PeriodicalId":285,"journal":{"name":"Computer Physics Communications","volume":"319 ","pages":"Article 109907"},"PeriodicalIF":3.4,"publicationDate":"2025-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145414754","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-10-24DOI: 10.1016/j.cpc.2025.109904
Alexander Benedix Robles , Phil-Alexander Hofmann , Thomas Chuna , Tobias Dornheim , Michael Hecht
Path integral Monte Carlo (PIMC) simulations are a cornerstone method for studying quantum many-body systems, such as warm dense matter and ultracold atoms. The analytic continuation needed to estimate dynamic quantities from these simulations amounts to an inverse Laplace transform, which is an ill-conditioned problem. If this challenging problem were surmounted, dynamical observables such as the dynamic structure factor (DSF) S(q,ω)—a key property, e.g., in x-ray and neutron scattering experiments—could be extracted from estimates of the imaginary-time correlation functions F(q,τ). Although of fundamental importance, the analytic continuation problem remains challenging due to its ill-posedness, and state-of-the-art techniques continue to deliver unsatisfactory results. To address this challenge, we express the DSF as a linear combination of kernel functions with known Laplace transforms that have been tailored to satisfy its physical constraints, e.g., detailed balance. Then we employ least-squares optimization regularized with a Bayesian prior estimate to determine the coefficients of this linear combination. We explore various regularization terms, such as the commonly used entropic regularizer, as well as the uncommon L<sup>2</sup>-distance and CDF L<sup>2</sup>-distance. We also explore techniques for setting the regularization weight. A key outcome and contribution is the open-source package PyLIT (Python Laplace Inverse Transform), which leverages Numba for C-level performance, unifying the presented formulations. PyLIT’s core functionality is kernel construction and optimization. In our applications, we find that PyLIT’s DSF estimates share many qualitative features with other more established methods. Drawing from our insights, we identify three key findings. Firstly, independent of the regularization choice, utilizing non-uniform grid point distributions reduced the number of unknowns and thus reduced our space of possible solutions. 
Secondly, the L<sup>2</sup>-distance and CDF L<sup>2</sup>-distance, previously unexplored regularizers, benefit from their linear gradients and perform about as well as entropic regularization. Thirdly, future work can meaningfully combine regularized and stochastic optimization.
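The kernel-representation-plus-regularized-least-squares formulation can be sketched in a few lines of NumPy. The Gaussian basis, grids, noise level, and flat prior below are all invented for illustration and make no attempt to reproduce PyLIT's kernels or its entropic/CDF regularizers; only the L2-distance-to-prior term is shown:

```python
import numpy as np

# Represent S(omega) as a sum of fixed basis kernels, relate the
# coefficients to imaginary-time data F(tau) through a Laplace-transform
# matrix, and penalize the L2 distance to a prior estimate.
rng = np.random.default_rng(1)

omega = np.linspace(0.0, 10.0, 200)          # frequency grid
tau = np.linspace(0.0, 1.0, 30)              # imaginary-time grid
centers = np.linspace(0.5, 8.0, 15)
basis = np.exp(-0.5 * ((omega[:, None] - centers[None, :]) / 0.6) ** 2)

# Laplace-transform each basis function: K[i, j] ≈ ∫ e^{-tau_i w} g_j(w) dw
lap = np.exp(-tau[:, None] * omega[None, :])           # (n_tau, n_omega)
K = lap @ basis * (omega[1] - omega[0])                # (n_tau, n_basis)

# Synthetic "true" spectrum and noisy F(tau) data.
c_true = np.exp(-0.5 * ((centers - 3.0) / 1.0) ** 2)
F = K @ c_true + 1e-4 * rng.standard_normal(tau.size)

# Regularized least squares: min |Kc - F|^2 + lam |c - c0|^2, solved via
# the normal equations (K^T K + lam I) c = K^T F + lam c0.
lam = 1e-3
c0 = np.full(centers.size, 0.3)              # crude flat prior
c = np.linalg.solve(K.T @ K + lam * np.eye(centers.size),
                    K.T @ F + lam * c0)

S_rec = basis @ c                            # reconstructed spectrum
print(np.linalg.norm(K @ c - F))             # small data residual
```

The regularizer is what tames the ill-conditioning of K: directions of coefficient space that the exponentially decaying kernel barely constrains are filled in by the prior instead of by amplified noise.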
{"title":"PyLIT: Reformulation and implementation of the analytic continuation problem using kernel representation methods","authors":"Alexander Benedix Robles , Phil-Alexander Hofmann , Thomas Chuna , Tobias Dornheim , Michael Hecht","doi":"10.1016/j.cpc.2025.109904","DOIUrl":"10.1016/j.cpc.2025.109904","url":null,"abstract":"<div><div>Path integral Monte Carlo (PIMC) simulations are a cornerstone method for studying quantum many-body systems, such as warm dense matter and ultracold atoms. The analytic continuation needed to estimate dynamic quantities from these simulations amounts to an inverse Laplace transform, which is an ill-conditioned problem. If this challenging problem were surmounted, dynamical observables such as the dynamic structure factor (DSF) <span><math><mrow><mi>S</mi><mo>(</mo><mi>q</mi><mo>,</mo><mi>ω</mi><mo>)</mo></mrow></math></span>—a key property, e.g., in x-ray and neutron scattering experiments—could be extracted from estimates of the imaginary-time correlation functions <span><math><mrow><mi>F</mi><mo>(</mo><mi>q</mi><mo>,</mo><mi>τ</mi><mo>)</mo></mrow></math></span>. Although of fundamental importance, the analytic continuation problem remains challenging due to its ill-posedness, and state-of-the-art techniques continue to deliver unsatisfactory results. To address this challenge, we express the DSF as a linear combination of kernel functions with known Laplace transforms that have been tailored to satisfy its physical constraints, <em>e.g.</em>, detailed balance. Then we employ least-squares optimization regularized with a Bayesian prior estimate to determine the coefficients of this linear combination. We explore various regularization terms, such as the commonly used entropic regularizer, as well as the uncommon <span><math><msup><mi>L</mi><mn>2</mn></msup></math></span>-distance and CDF <span><math><msup><mi>L</mi><mn>2</mn></msup></math></span>-distance. We also explore techniques for setting the regularization weight. 
A key outcome and contribution is the open-source package PyLIT (<strong>Py</strong>thon <strong>L</strong>aplace <strong>I</strong>nverse <strong>T</strong>ransform), which leverages Numba for C-level performance, unifying the presented formulations. PyLIT’s core functionality is kernel construction and optimization. In our applications, we find that PyLIT’s DSF estimates share many qualitative features with other more established methods. Drawing from our insights, we identify three key findings. Firstly, independent of the regularization choice, utilizing non-uniform grid point distributions reduced the number of unknowns and thus reduced our space of possible solutions. Secondly, the <span><math><msup><mi>L</mi><mn>2</mn></msup></math></span>-distance and CDF <span><math><msup><mi>L</mi><mn>2</mn></msup></math></span>-distance, previously unexplored regularizers, benefit from their linear gradients and perform about as well as entropic regularization. Thirdly, future work can meaningfully combine regularized and stochastic optimization.</div></div>","PeriodicalId":285,"journal":{"name":"Computer Physics Communications","volume":"319 ","pages":"Article 109904"},"PeriodicalIF":3.4,"publicationDate":"2025-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145414643","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}