Pub Date : 2025-11-05 | DOI: 10.1016/j.cpc.2025.109941
Faranak Rajabi, Jacob Fingerman, Andrew Wang, Jeff Moehlis, Frederic Gibou
CASL-HJX is a high-performance C++ framework for solving deterministic and stochastic Hamilton-Jacobi equations in two spatial dimensions. It integrates operator-splitting techniques with implicit treatment of parabolic terms, yielding substantial speedups over explicit methods commonly used for stochastic problems. The solver leverages monotone schemes to ensure convergence to viscosity solutions, for which we provide numerical evidence through systematic validation. The Hamilton-Jacobi-Bellman formulation enables global optimization beyond local methods. This performance advantage opens the door to applications that were previously intractable, including real-time control and rapid design iteration. We demonstrate the framework’s capabilities on benchmark PDEs as well as a neuroscience case study designing energy-efficient controllers for neural populations. The modular architecture allows users to define custom Hamiltonians and boundary conditions, making CASL-HJX broadly applicable to optimal control, front propagation, and uncertainty quantification across finance, engineering, and machine learning. Although currently limited to two spatial dimensions, CASL-HJX addresses critical gaps where gradient-based methods struggle in non-convex landscapes and local optimization yields suboptimal results. Complete source code, documentation, and examples are freely available.
{"title":"CASL-HJX: A comprehensive guide to solving deterministic and stochastic hamilton-Jacobi equations","authors":"Faranak Rajabi, Jacob Fingerman, Andrew Wang, Jeff Moehlis, Frederic Gibou","doi":"10.1016/j.cpc.2025.109941","DOIUrl":"10.1016/j.cpc.2025.109941","url":null,"abstract":"<div><div>CASL-HJX is a high-performance C++ framework for solving deterministic and stochastic Hamilton-Jacobi equations in two spatial dimensions. It integrates operator-splitting techniques with implicit treatment of parabolic terms, yielding substantial speedups over explicit methods commonly used for stochastic problems. The solver leverages monotone schemes to ensure convergence to viscosity solutions, for which we provide numerical evidence through systematic validation. The Hamilton-Jacobi-Bellman formulation enables global optimization beyond local methods. This performance advantage opens the door to applications that were previously intractable, including real-time control and rapid design iteration. We demonstrate the framework’s capabilities on benchmark PDEs as well as a neuroscience case study designing energy-efficient controllers for neural populations. The modular architecture allows users to define custom Hamiltonians and boundary conditions, making CASL-HJX broadly applicable to optimal control, front propagation, and uncertainty quantification across finance, engineering, and machine learning. Although currently limited to two spatial dimensions, CASL-HJX addresses critical gaps where gradient-based methods struggle in non-convex landscapes and local optimization yields suboptimal results. Complete source code, documentation, and examples are freely available.</div></div>","PeriodicalId":285,"journal":{"name":"Computer Physics Communications","volume":"319 ","pages":"Article 109941"},"PeriodicalIF":3.4,"publicationDate":"2025-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145576714","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-11-04 | DOI: 10.1016/j.cpc.2025.109938
Steven M. McCann, Tim Mercer
When measuring the magnetic characteristics of a magnetic sample, it is critical to evaluate the self-demagnetisation field, because it reduces the effective magnetic field experienced by the sample. The demagnetisation factor depends on the shape and nature of the sample, whether it is a solid, an ordered assembly of magnetic elements, or a randomly packed magnetic powder in a containing vessel. The literature provides limited information on the demagnetisation factor of packed powders, typically for a restricted number of container shapes. This paper introduces algorithms based on a polar model written in MATLAB 2022b, which calculate not only the average demagnetisation factor but also the entire distribution of demagnetisation factors for the constituent particles and, by extension, for any assembly of magnetic elements within a given volume. Furthermore, this study explains how to enhance the efficiency of these algorithms, reduce runtime, and apply them to any container shape.
The validity of the algorithms was assessed by calculating the data for three common container shapes described in the literature over a range of aspect ratios: cuboids, ellipsoids, and cylinders. The calculated mean demagnetisation factors matched those found in the literature, typically within 0.05 %, 0.1 %, and 1 %, respectively, for these shapes, demonstrating that the algorithms can be extrapolated to calculate demagnetisation data for any container shape and, by extension, the magnetometric demagnetisation factor (zero susceptibility) of any solid shape, a hitherto unattainable parameter.
As the method reduces to calculations based on geometry alone, it is material-independent and can be applied to any macro-, meso-, or microscale of interest.
{"title":"Calculating the demagnetisation factors and their volume distribution within (a) assemblies of discrete magnetic elements and (b) solid magnetic samples of any given shape: A material-independent and multi-scalar polar model approach","authors":"Steven M. McCann, Tim Mercer","doi":"10.1016/j.cpc.2025.109938","DOIUrl":"10.1016/j.cpc.2025.109938","url":null,"abstract":"<div><div>Measuring the magnetic characteristics of a magnetic sample, it is critical to evaluate the self-demagnetisation field, because it reduces the effective magnetic field experienced by the sample. The demagnetisation factor depends on the shape and nature of the sample, whether it is a solid, ordered assembly of magnetic elements, or randomly packed magnetic powder in a containing vessel. Literature provides limited information on the demagnetisation factor of packed powders, typically for a restricted number of container shapes. This paper introduces algorithms based on a polar model written in MATLAB 2022b, which calculates not only the average demagnetisation factor but also the entire distribution of demagnetisation factors for the constituent particles and, by extension, to any assembly of magnetic elements within a given volume. Furthermore, this study explains how to enhance the efficiency of these algorithms, reduce runtime, and apply them to any container shape.</div><div>The validity of the algorithms was assessed by calculating the data for three common container shapes described in literature over a range of aspect ratios: cuboids, ellipsoids, and cylinders. The calculated mean demagnetisation factors matched those found in the literature, typically within 0.05 %, 0.1 %, and 1 %, respectively, for these shapes, demonstrating that the algorithms could be extrapolated to calculate demagnetisation data for any container shape; by extension, the magnetometric demagnetisation factor (zero susceptibility) for any solid shape, a hitherto unattainable parameter.</div><div>As the method reduces to calculations based on geometry alone, it is material-independent and can be applied to any macro-, meso-, or microscale of interest.</div></div>","PeriodicalId":285,"journal":{"name":"Computer Physics Communications","volume":"319 ","pages":"Article 109938"},"PeriodicalIF":3.4,"publicationDate":"2025-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145517655","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-11-03 | DOI: 10.1016/j.cpc.2025.109939
Tao Yu, Jidong Zhao
Adaptive mesh refinement (AMR) is essential for accurately resolving interfacial dynamics in resolved coupled computational fluid dynamics-discrete element method (CFD-DEM) and two-phase CFD simulations. However, traditional methods struggle with logical complexity and memory inefficiency when applied to unstructured grids on GPU architectures. This paper presents a novel GPU-accelerated AMR algorithm that eliminates CPU-GPU data transfers and minimizes grid manipulation overhead through a compressed data format and topology-aware reuse strategies. By reconstructing mesh topology entirely on the GPU and retaining parent-mesh indexing, our method reduces AMR-related computational overhead to less than 25% of the total simulation time while ensuring full compatibility with unstructured granular domains. A CUDA-centric implementation, validated across five benchmarks and two powder-based additive manufacturing applications, demonstrates that our framework achieves accuracy comparable to uniformly refined grids with 50% lower computational effort. Furthermore, it exhibits near-linear throughput performance with increasing problem size and achieves over 20× speedup in large-scale laser powder bed fusion simulations when integrated with GPU-accelerated CFD-DEM solvers. The scalability of the algorithm is further highlighted through hexahedral mesh case studies, with extensibility to general unstructured grids via sub-mesh templating. These advancements enable high-fidelity, GPU-native simulations of complex fluid-particle systems, effectively bridging the gap between adaptive resolution and large-scale parallelism in complex two-phase resolved CFD-DEM simulations.
{"title":"GPU-optimized adaptive mesh refinement for scalable two-phase resolved CFD-DEM simulations on unstructured hexahedral grids","authors":"Tao Yu, Jidong Zhao","doi":"10.1016/j.cpc.2025.109939","DOIUrl":"10.1016/j.cpc.2025.109939","url":null,"abstract":"<div><div>Adaptive mesh refinement (AMR) is essential for accurately resolving interfacial dynamics in resolved coupled computational fluid dynamics-discrete element method (CFD-DEM) and two-phase CFD simulations. However, traditional methods struggle with logical complexity and memory inefficiency when applied to unstructured grids on GPU architectures. This paper presents a novel GPU-accelerated AMR algorithm that eliminates CPU-GPU data transfers and minimizes grid manipulation overhead through a compressed data format and topology-aware reuse strategies. By reconstructing mesh topology entirely on the GPU and retaining parent-mesh indexing, our method reduces AMR-related computational overhead to less than 25% of the total simulation time while ensuring full compatibility with unstructured granular domains. A CUDA-centric implementation, validated across five benchmarks and two powder-based additive manufacturing applications, demonstrates that our framework achieves accuracy comparable to uniformly refined grids with 50% lower computational effort. Furthermore, it exhibits near-linear throughput performance with increasing problem size and achieves over 20 <span><math><mo>×</mo></math></span> speedup in large-scale laser powder bed fusion simulations when integrated with GPU-accelerated CFD-DEM solvers. The scalability of the algorithm is further highlighted through hexahedral mesh case studies, with extensibility to general unstructured grids via sub-mesh templating. These advancements enable high-fidelity, GPU-native simulations of complex fluid-particle systems, effectively bridging the gap between adaptive resolution and large-scale parallelism in complex two-phase resolved CFD-DEM simulations.</div></div>","PeriodicalId":285,"journal":{"name":"Computer Physics Communications","volume":"319 ","pages":"Article 109939"},"PeriodicalIF":3.4,"publicationDate":"2025-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145517653","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-11-03 | DOI: 10.1016/j.cpc.2025.109932
Mahdieh Sadat Mousavi, Faezeh Rahmani
The IRAND (a segmented plastic scintillator IRan ANtineutrino Detector; a 10 × 10 array of plastic scintillators) is Iran’s only segmented antineutrino detector, designed for reactor antineutrino detection in safeguards and reactor monitoring applications. While previous simulations of the IRAND have been conducted, they have been constrained by fixed structural specifications and rigid parameter frameworks, lacking a user-friendly, flexible simulation package essential for advanced simulation-based design. To address this gap, we present IRAND-Sim-02, a novel GUI-based simulation package developed using the Geant4 Monte Carlo toolkit, Qt framework, and Python. This package facilitates real-time, interactive simulations of the IRAND, enabling precise modeling of antineutrino interactions and cosmic muon events, which constitute the dominant background in antineutrino detection. IRAND-Sim-02 offers researchers an intuitive interface to dynamically configure simulation parameters, model event interactions, and generate automated real-time reports on event behaviors. Its user-friendly design ensures accessibility for a broad range of users, including those without prior expertise in Geant4 or Python, thereby streamlining the simulation process and enhancing research efficiency in antineutrino detection and reactor monitoring.
Program summary
Program title: IRAND-Sim-02.
CPC Library link to program files: Data will be available on request.
Licensing provisions: GPLv3.
Programming language: C++, Python.
Nature of problem: The earlier IRAND simulations use fixed structures, rigid parameters, and need Geant4 expertise. This complexity limits adaptability, prevents testing new configurations, and makes the tool inaccessible to users without coding skills, which slows research on antineutrino detection and background events.
Solution method: The solution is IRAND-Sim-02, a flexible, GUI-based simulation package built with Geant4, Qt, and Python. It lets users configure simulation parameters in real time, interact intuitively, and receive automated event reports, so that even users without coding skills can efficiently simulate the IRAND and antineutrino/cosmic-muon events.
{"title":"IRAND-Sim-02: A flexible GUI-based simulation package for the IRAND (IRan ANtineutrino Detector)","authors":"Mahdieh Sadat Mousavi, Faezeh Rahmani","doi":"10.1016/j.cpc.2025.109932","DOIUrl":"10.1016/j.cpc.2025.109932","url":null,"abstract":"<div><div>The IRAND (a segmented plastic scintillator IRan ANtineutrino Detector; a 10 × 10 array of plastic scintillation) is Iran’s only segmented antineutrino detector, designed for reactor antineutrino detection in safeguards and reactor monitoring applications. While previous simulations of the IRAND have been conducted, they have been constrained by fixed structural specifications and rigid parameter frameworks, lacking a user-friendly, flexible simulation package essential for advanced simulation-based design. To address this gap, we present IRAND-Sim-02, a novel GUI-based simulation package developed using the Geant4 Monte Carlo toolkit, Qt framework, and Python. This package facilitates real-time, interactive simulations of the IRAND, enabling precise modeling of antineutrino interactions and cosmic muon events, which constitute the dominant background in antineutrino detection. IRAND-Sim-02 offers researchers an intuitive interface to dynamically configure simulation parameters, model event interactions, and generate automated real-time reports on event behaviors. Its user-friendly design ensures accessibility for a broad range of users, including those without prior expertise in Geant4 or Python, thereby streamlining the simulation process and enhancing research efficiency in antineutrino detection and reactor monitoring.</div><div><strong><em>Program summary</em></strong></div><div><em>Program title</em>: IRAND-Sim-02.</div><div><em>CPC Library link to program files</em>: Data will be available on request.</div><div><em>Licensing provisions</em>: GPLv3.</div><div><em>Programming language: C</em>++, Python.</div><div><em>Nature of problem</em>: The earlier IRAND simulations use fixed structures, rigid parameters, and need Geant4 expertise. This complexity limits adaptability, prevents testing new configurations, and makes the tool inaccessible to users without coding skills, which slows research on antineutrino detection and background events.</div><div><em>Solution method</em>: The solution is IRAND-Sim-02, a flexible, GUI-based simulation package built with Geant4, Qt, and Python. It lets users configure simulation parameters in real time, interact intuitively, and get automated event reports, so even without coding skills they can efficiently simulate the IRAND and antineutrino/cosmic muon events easily.</div></div>","PeriodicalId":285,"journal":{"name":"Computer Physics Communications","volume":"319 ","pages":"Article 109932"},"PeriodicalIF":3.4,"publicationDate":"2025-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145464405","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-11-02 | DOI: 10.1016/j.cpc.2025.109919
Laura A. Völker, John M. Abendroth, Christian L. Degen, Konstantin Herb
We present an open-source simulation framework for optically detected magnetic resonance, developed in Python. The framework is designed to simulate multipartite quantum systems composed of spins and electronic levels, enabling the study of systems such as nitrogen-vacancy centers in diamond and photo-generated spin-correlated radical pairs. Our library provides system-specific sub-modules for these and related problems. It supports efficient time-evolution in Lindblad form, along with tools for simulating spatial and generalized stochastic dynamics. Symbolic operator construction and propagation are also supported for simple model systems, making the framework well-suited for classroom instruction in magnetic resonance. Designed to be backend-agnostic, the library interfaces with existing Python packages as computational backends. We introduce the core functionality and illustrate the syntax through a series of representative examples.
{"title":"SimOS: A Python framework for simulations of optically addressable spins","authors":"Laura A. Völker , John M. Abendroth , Christian L. Degen , Konstantin Herb","doi":"10.1016/j.cpc.2025.109919","DOIUrl":"10.1016/j.cpc.2025.109919","url":null,"abstract":"<div><div>We present an open-source simulation framework for optically detected magnetic resonance, developed in Python. The framework is designed to simulate multipartite quantum systems composed of spins and electronic levels, enabling the study of systems such as nitrogen-vacancy centers in diamond and photo-generated spin-correlated radical pairs. Our library provides system-specific sub-modules for these and related problems. It supports efficient time-evolution in Lindblad form, along with tools for simulating spatial and generalized stochastic dynamics. Symbolic operator construction and propagation are also supported for simple model systems, making the framework well-suited for classroom instruction in magnetic resonance. Designed to be backend-agnostic, the library interfaces with existing Python packages as computational backends. We introduce the core functionality and illustrate the syntax through a series of representative examples.</div></div>","PeriodicalId":285,"journal":{"name":"Computer Physics Communications","volume":"320 ","pages":"Article 109919"},"PeriodicalIF":3.4,"publicationDate":"2025-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145681669","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-10-31 | DOI: 10.1016/j.cpc.2025.109910
Samad Hajinazar, Eva Zurek
Version 14 of XtalOpt, an evolutionary multi-objective global optimization algorithm for crystal structure prediction, is now available for download from its official website https://xtalopt.github.io and the Computer Physics Communications Library. The new version of the code is designed to perform a ground state search for crystal structures with variable compositions by integrating a suite of ab initio methods alongside classical and machine-learning potentials for structural relaxation. The multi-objective search framework has been enhanced through the introduction of Pareto optimization, enabling efficient discovery of functional materials. Herein, we describe the newly implemented methodologies, provide detailed instructions for their use, and present an overview of additional improvements included in the latest version of the code.
NEW VERSION PROGRAM SUMMARY
Program title: XtalOpt.
CPC Library link to program files: https://doi.org/10.17632/jt5pvnnm39.5.
Developer’s repository link: https://github.com/xtalopt/XtalOpt.
Code Ocean capsule: (to be added by Technical Editor).
Licensing provisions: 3-clause/BSD.
Programming language: C++.
Journal reference of previous version: Computer Physics Communications 304 (2024) 109306.
Does the new version supersede the previous version?: Yes.
Reasons for the new version: Implementation of the variable-composition evolutionary search feature and Pareto optimization within the XtalOpt program package.
Summary of revisions: Implemented evolutionary global optimization of structures with variable compositions, the Pareto algorithm for multi-objective optimization, and the multi-cut crossover operation. Various improvements have been made to the user interface, and bugs have been fixed.
Nature of problem: For a given set of chemical constituents the XtalOpt algorithm can search for (meta)stable crystal structures with fixed or varying compositions and optionally with specific functionalities – a grand challenge in computational materials science, chemistry and physics.
Solution method: During the search process, the convex hull of the chemical system is calculated and updated. Instead of enthalpy, the “distance above the convex hull” is used as the target value for global optimization. The genetic operations are revised to enable the evolution of parent structures with different compositions, and to possibly produce new compositions. To further enhance the code’s capability of performing a multi-objective search, the Pareto optimization scheme is implemented. This allows the user to choose from
{"title":"XtalOpt version 14: Variable-composition crystal structure search for functional materials through Pareto optimization","authors":"Samad Hajinazar, Eva Zurek","doi":"10.1016/j.cpc.2025.109910","DOIUrl":"10.1016/j.cpc.2025.109910","url":null,"abstract":"<div><div>Version 14 of <span>XtalOpt</span>, an evolutionary multi-objective global optimization algorithm for crystal structure prediction, is now available for download from its official website <span><span>https://xtalopt.github.io</span><svg><path></path></svg></span>, and the Computer Physics Communications Library. The new version of the code is designed to perform a ground state search for crystal structures with variable compositions by integrating a suite of <em>ab initio</em> methods alongside classical and machine-learning potentials for structural relaxation. The multi-objective search framework has been enhanced through the introduction of Pareto optimization, enabling efficient discovery of functional materials. Herein, we describe the newly implemented methodologies, provide detailed instructions for their use, and present an overview of additional improvements included in the latest version of the code.</div><div><strong>NEW VERSION PROGRAM SUMMARY</strong> <em>Program Title:</em> XtalOpt <em>CPC Library link to program files:</em> <span><span>https://doi.org/10.17632/jt5pvnnm39.5</span><svg><path></path></svg></span></div><div><em>Developer’s repository link:</em> <span><span>https://github.com/xtalopt/XtalOpt</span><svg><path></path></svg></span></div><div><em>Code Ocean capsule:</em> (to be added by Technical Editor)</div><div><em>Licensing provisions:</em> 3-clause/BSD.</div><div><em>Programming language:</em> C++. <em>Journal reference of previous version:</em> Computer Physics Communications 304 (2024) 109306. <em>Does the new version supersede the previous version?:</em> Yes.</div><div><em>Reasons for the new version:</em> Implementation of the variable-composition evolutionary search feature and Pareto optimization within the <span>XtalOpt</span> program package.</div><div><em>Summary of revisions:</em> Implemented evolutionary global optimization of structures with variable compositions, the Pareto algorithm for multi-objective optimization, and the multi-cut crossover operation. Various improvements have been made to the user interface, and bugs have been fixed.</div><div><em>Nature of problem:</em> For a given set of chemical constituents the <span>XtalOpt</span> algorithm can search for (meta)stable crystal structures with fixed or varying compositions and optionally with specific functionalities – a grand challenge in computational materials science, chemistry and physics.</div><div><em>Solution method:</em> During the search process, the convex hull of the chemical system is calculated and updated. Instead of enthalpy, the “distance above the convex hull” is used as the target value for global optimization. The genetic operations are revised to enable the evolution of parent structures with different compositions, and to possibly produce new compositions. To further enhance the code’s capability of performing a multi-objective search, the Pareto optimization scheme is implemented. 
This allows the user to choose from","PeriodicalId":285,"journal":{"name":"Computer Physics Communications","volume":"320 ","pages":"Article 109910"},"PeriodicalIF":3.4,"publicationDate":"2025-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145733047","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
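The Pareto step referred to in the solution method can be pictured with a small stand-alone sketch: given per-structure objective values to be minimized (for example, distance above the convex hull and the negative of a target property), the non-dominated set is extracted. XtalOpt itself is C++; the Python below is purely illustrative, and the example numbers are made up.

```python
# Pareto-front extraction for a multi-objective structure search (illustrative
# only, not XtalOpt code). A candidate is non-dominated if no other candidate
# is at least as good in every objective and strictly better in at least one.
import numpy as np

def pareto_front(objectives: np.ndarray) -> np.ndarray:
    """Return a boolean mask of non-dominated rows (all objectives minimized)."""
    n = objectives.shape[0]
    nondominated = np.ones(n, dtype=bool)
    for i in range(n):
        dominates_i = (np.all(objectives <= objectives[i], axis=1)
                       & np.any(objectives < objectives[i], axis=1))
        if np.any(dominates_i):
            nondominated[i] = False
    return nondominated

if __name__ == "__main__":
    # columns: (distance above hull in eV/atom, negative of a target property)
    candidates = np.array([[0.00, -10.0],
                           [0.05, -14.0],
                           [0.02, -12.0],
                           [0.05, -11.0],   # dominated by the second row
                           [0.10, -15.0]])
    mask = pareto_front(candidates)
    print("Pareto-optimal candidates:", np.nonzero(mask)[0])
```

In an evolutionary search the resulting Pareto rank (rather than a single scalar fitness) is then used to select parent structures for the genetic operations.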
Pub Date : 2025-10-30 | DOI: 10.1016/j.cpc.2025.109903
Mengmeng Song, Wei Yang, Qiang Sun, Zhaohui Liu, Ziming Wang, Ye Dong, Hantian Zhang, Qianhong Zhou
The implicit particle-in-cell (PIC) simulation enables a larger time step and cell size to improve computational efficiency; however, an implicit PIC in spherical axisymmetric geometry has been absent. In this paper, a 1D3V electrostatic implicit spherical PIC algorithm is introduced. The algorithm decouples the centrifugal force and the electric field force to mitigate the complexity of solving Poisson’s equation. The motion of charged particles is executed through a three-step process: first, particles are pre-pushed to solve Poisson’s equation; second, they are pushed with the obtained electric field force; last, the third push is performed under the centrifugal force calculated before the first push. The accuracy of the implicit spherical PIC algorithm is verified by simulating the floating potential of a grain immersed in a collisionless plasma. Compared to the explicit simulation with cell size 1 dr and time step 1 dt in the plasma spherical expansion model, the implicit PIC produces sufficiently precise results even over a large range of time steps of 1–10 dt and cell sizes of 1–6 dr. Then, the validated algorithm is employed to study ion acceleration in a multicomponent cathode-spot collision plasma, demonstrating that the electric field induces velocity separation between light and heavy ions in the plasma vacuum expansion stage. The algorithms developed in this paper can be applied to efficiently simulate kinetic processes in spherically symmetric plasmas, such as laser-induced ion sources and vacuum arc discharges.
{"title":"1D3V electrostatic implicit particle simulation method in spherical axisymmetric geometry","authors":"Mengmeng Song , Wei Yang , Qiang Sun , Zhaohui Liu , Ziming Wang , Ye Dong , Hantian Zhang , Qianhong Zhou","doi":"10.1016/j.cpc.2025.109903","DOIUrl":"10.1016/j.cpc.2025.109903","url":null,"abstract":"<div><div>The implicit particle-in-cell (PIC) simulation enables a larger time step and cell size to improve computational efficiency, however, an implicit PIC in spherical axisymmetric geometry has been absent. In this paper, a 1D3V electrostatic implicit spherical PIC algorithm is introduced. The algorithm decouples the centrifugal force and electric field force to mitigate the complexity of solving Poisson’s equation. The motion of charged particles are executed through three-step process: first, particles are pre-pushed to solve Poisson’s equation; second, they are pushed with the obtained electric field force; last, the third-pushed is performed under centrifugal force calculated before the first pushing. The accuracy of the implicit spherical PIC algorithm is verified by simulating the floating potential of grain immersed in a collisionless plasma. Compared to the explicit simulation with <span><math><mrow><mn>1</mn><mi>d</mi><mi>r</mi><mo>−</mo><mn>1</mn><mi>d</mi><mi>t</mi></mrow></math></span> in plasma spherical expansion model, implicit PIC produces sufficiently precise results even with a large range of time step of <span><math><mrow><mn>1</mn><mo>−</mo><mn>10</mn><mrow><mi>d</mi><mi>t</mi></mrow></mrow></math></span> and cell size of <span><math><mrow><mn>1</mn><mo>−</mo><mn>6</mn><mrow><mi>d</mi><mi>r</mi></mrow></mrow></math></span>. Then, the validated algorithm is employed to study the ion acceleration in multicomponent cathode spot collision plasma, demonstrating that the electric field induces velocity separation between light and heavy ion in plasma vacuum expansion stage. The algorithms developed in this paper can be applied to efficiently simulate the kinetic process in spherically symmetrically plasma, such as the laser induced ions source and vacuum arc discharges.</div></div>","PeriodicalId":285,"journal":{"name":"Computer Physics Communications","volume":"319 ","pages":"Article 109903"},"PeriodicalIF":3.4,"publicationDate":"2025-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145464406","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-10-30 | DOI: 10.1016/j.cpc.2025.109913
K. Topolnicki, S. Sharma, Yu. Volkotrub, M. Das
Different approaches to representing a two-dimensional statistical distribution for likelihood optimization and Bayesian inference are investigated. The suggested methods can be generalized to more dimensions, opening up the possibility of using these representations for more complex problems. We use the PyTorch Machine Learning library, and all calculations related to the investigated methods are easily expressible within this framework. The capabilities provided by modern Machine Learning libraries are utilized in order to be flexible and applicable to a wide range of problems. Our calculations were performed using a simple statistical toy model, similar in some aspects to techniques used in two-dimensional medical imaging. We present numerical results for image reconstruction based on sparse data consisting of only 1000 registered data points and on a larger sample of 80,000 data points.
{"title":"Exploring maximum likelihood and Bayesian approaches for two-dimensional image restoration: A machine learning perspective","authors":"K. Topolnicki, S. Sharma, Yu. Volkotrub, M. Das","doi":"10.1016/j.cpc.2025.109913","DOIUrl":"10.1016/j.cpc.2025.109913","url":null,"abstract":"<div><div>Different approaches to representing a two dimensional statistical distribution for likelihood optimization and Bayesian inference are investigated. The suggested methods can be generalized to more dimensions, opening up the possibility of using these representations for more complex problems. We use the <span>PyTorch</span> Machine Learning library and all calculations related to the investigated methods are easily expressible within this framework. The capabilities provided by modern Machine Learning libraries are utilized in order to be flexible and applicable to a wide range of problems. Our calculations were performed using a simple statistical toy model, similar in some aspects to techniques used in two dimensional medical imaging. We present numerical results for image reconstruction based on sparse data consisting of only 1000 registered data points and on a larger sample of 80,000 data points.</div></div>","PeriodicalId":285,"journal":{"name":"Computer Physics Communications","volume":"319 ","pages":"Article 109913"},"PeriodicalIF":3.4,"publicationDate":"2025-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145464404","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-10-29 | DOI: 10.1016/j.cpc.2025.109918
Ryo Watanabe, Hidetaka Manabe, Toshiya Hikihara, Hiroshi Ueda
We have developed TTNOpt, a software package that utilizes tree tensor networks (TTNs) for quantum spin systems and high-dimensional data analysis. TTNOpt provides efficient and powerful TTN computations by locally optimizing the network structure, guided by the entanglement pattern of the target tensors. For quantum spin systems, TTNOpt searches for the ground state of Hamiltonians with bilinear spin interactions and magnetic fields, and computes physical properties of these states, including the variational energy, bipartite entanglement entropy (EE), single-site expectation values, and two-site correlation functions. Additionally, TTNOpt can target the lowest-energy state within a specified subspace, provided that the Hamiltonian conserves total magnetization. For high-dimensional data analysis, TTNOpt factorizes complex tensors into TTN states that maximize fidelity to the original tensors by optimizing the tensors and the network. When a TTN is provided as input, TTNOpt reconstructs the network based on the EE without referencing the fidelity of the original state. We present three demonstrations of TTNOpt: (1) Ground-state search for the hierarchical chain model with a system size of 256. The entanglement patterns of the ground state manifest themselves in a tree structure, and TTNOpt successfully identifies the tree. (2) Factorization of a quantic tensor of 2^24 dimensions representing a three-variable function where each variable has a weak bit-wise correlation. The optimized TTN shows that its structure isolates the variables from each other. (3) Reconstruction of the matrix product network representing a 16-variable normal distribution characterized by a tree-like correlation structure. TTNOpt can reveal hidden correlation structures of the covariance matrix.
{"title":"TTNOpt: Tree tensor network package for high-rank tensor compression","authors":"Ryo Watanabe , Hidetaka Manabe , Toshiya Hikihara , Hiroshi Ueda","doi":"10.1016/j.cpc.2025.109918","DOIUrl":"10.1016/j.cpc.2025.109918","url":null,"abstract":"<div><div>We have developed TTNOpt, a software package that utilizes tree tensor networks (TTNs) for quantum spin systems and high-dimensional data analysis. TTNOpt provides efficient and powerful TTN computations by locally optimizing the network structure, guided by the entanglement pattern of the target tensors. For quantum spin systems, TTNOpt searches for the ground state of Hamiltonians with bilinear spin interactions and magnetic fields, and computes physical properties of these states, including the variational energy, bipartite entanglement entropy (EE), single-site expectation values, and two-site correlation functions. Additionally, TTNOpt can target the lowest-energy state within a specified subspace, provided that the Hamiltonian conserves total magnetization. For high-dimensional data analysis, TTNOpt factorizes complex tensors into TTN states that maximize fidelity to the original tensors by optimizing the tensors and the network. When a TTN is provided as input, TTNOpt reconstructs the network based on the EE without referencing the fidelity of the original state. We present three demonstrations of TTNOpt: (1) Ground-state search for the hierarchical chain model with a system size of 256. The entanglement patterns of the ground state manifest themselves in a tree structure, and TTNOpt successfully identifies the tree. (2) Factorization of a quantic tensor of the <span><math><msup><mn>2</mn><mn>24</mn></msup></math></span> dimensions representing a three-variable function where each variant has a weak bit-wise correlation. The optimized TTN shows that its structure isolates the variables from each other. (3) Reconstruction of the matrix product network representing a 16-variable normal distribution characterized by a tree-like correlation structure. TTNOpt can reveal hidden correlation structures of the covariance matrix.</div></div>","PeriodicalId":285,"journal":{"name":"Computer Physics Communications","volume":"319 ","pages":"Article 109918"},"PeriodicalIF":3.4,"publicationDate":"2025-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145464402","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-10-28 | DOI: 10.1016/j.cpc.2025.109917
Ziqi Cui, Qihan Ma, Kaikai Feng, Jun Zhang
Hypersonic flows in re-entry missions exhibit multiscale processes and strong non-equilibrium effects, posing significant challenges for numerical simulation. Traditional stochastic particle methods for non-equilibrium gas flows, such as the direct simulation Monte Carlo (DSMC), suffer from order degradation in near-continuum regimes, resulting in reduced accuracy and computational inefficiency. The multiscale stochastic particle (MSP) method based on the Fokker-Planck model has recently emerged as a tailored approach for multiscale non-equilibrium gas flows, specifically designed to maintain high accuracy and computational efficiency even in near-continuum regimes. In this work, the MSP method is extended to diatomic gas flows by incorporating internal energy modes. Specifically, a particle-based Langevin integration scheme is developed to model internal energy relaxation. Building on this formulation, a modified collision step is introduced within the MSP framework, employing the flux correction strategy. The resulting scheme is shown to exhibit second-order temporal accuracy in the near-continuum regime. The proposed method for diatomic gases is validated against a range of benchmark problems, including homogeneous relaxation, normal shock structures, and hypersonic flows over a cylinder and a 70-degree blunted cone. The MSP method provides reliable results with coarser grids and larger time steps, substantially reducing computational cost. These results demonstrate its potential as an efficient and accurate approach for multiscale hypersonic flow simulation.
{"title":"A multiscale stochastic particle method based on the Fokker-Planck model for diatomic gas flows","authors":"Ziqi Cui, Qihan Ma, Kaikai Feng, Jun Zhang","doi":"10.1016/j.cpc.2025.109917","DOIUrl":"10.1016/j.cpc.2025.109917","url":null,"abstract":"<div><div>Hypersonic flows in re-entry missions exhibit multiscale processes and strong non-equilibrium effects, posing significant challenges for numerical simulation. Traditional stochastic particle methods for non-equilibrium gas flows, such as the direct simulation Monte Carlo (DSMC), suffer from order degradation in near-continuum regimes, resulting in reduced accuracy and computational inefficiency. The multiscale stochastic particle (MSP) method based on the Fokker-Planck model has recently emerged as a tailored approach for multiscale non-equilibrium gas flows, specifically designed to maintain high accuracy and computational efficiency even in near-continuum regimes. In this work, the MSP method is extended to diatomic gas flows by incorporating internal energy modes. Specifically, a particle-based Langevin integration scheme is developed to model internal energy relaxation. Building on this formulation, a modified collision step is introduced within the MSP framework, employing the flux correction strategy. The resulting scheme is shown to exhibit second-order temporal accuracy in the near-continuum regime. The proposed method for diatomic gases is validated against a range of benchmark problems, including homogeneous relaxation, normal shock structures, and hypersonic flows over a cylinder and a 70-degree blunted cone. The MSP method provides reliable results with coarser grids and larger time steps, substantially reducing computational cost. These results demonstrate its potential as an efficient and accurate approach for multiscale hypersonic flow simulation.</div></div>","PeriodicalId":285,"journal":{"name":"Computer Physics Communications","volume":"319 ","pages":"Article 109917"},"PeriodicalIF":3.4,"publicationDate":"2025-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145517650","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}