Pub Date: 2026-03-01 | Epub Date: 2025-12-18 | DOI: 10.1016/j.cpc.2025.109993
A. Diaw, C.A. Johnson, E.A. Unterberg, J. Nichols
OpenEdge is a collaborative, open-source, object-oriented Direct Simulation Monte Carlo (DSMC) code designed specifically for plasma simulations in magnetic fusion environments. The code features advanced structures, robust capabilities, and an effective parallelization strategy, all of which significantly enhance performance. It includes specialized modules for managing complex particle interactions, including collisions, ionization/recombination, and reflection/sputtering. Benchmarks and performance analyses have confirmed its efficiency and scalability. Versatile and adaptable, OpenEdge is applied across a broad spectrum of plasma-material interaction studies and charged-particle transport in various fusion research settings.
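The kind of stochastic event handling a DSMC collision/ionization module performs can be illustrated with a generic Monte Carlo event-sampling step: for a Poisson process with rate ν, the probability that a particle undergoes at least one event in a timestep Δt is p = 1 − exp(−νΔt). This is a minimal sketch of that standard idea, not OpenEdge's actual API; all names are hypothetical.

```python
import math
import random

def event_probability(rate, dt):
    """Probability that a Poisson process with the given rate fires
    at least once during a timestep dt: p = 1 - exp(-rate * dt)."""
    return 1.0 - math.exp(-rate * dt)

def sample_events(n_particles, rate, dt, rng):
    """Return indices of particles that undergo an event this step."""
    p = event_probability(rate, dt)
    return [i for i in range(n_particles) if rng.random() < p]

# Illustrative numbers: a 1 MHz event rate sampled on a 0.1 us timestep.
rng = random.Random(0)
p = event_probability(rate=1.0e6, dt=1.0e-7)
hits = sample_events(10000, 1.0e6, 1.0e-7, rng)
```

With these numbers p is roughly 0.095, so on the order of a thousand of the 10,000 particles are flagged per step; a production code would then resolve each flagged event (collision, ionization, etc.) individually.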
Title: "OpenEdge: A collaborative, open-source, multi-purpose direct simulation Monte Carlo for plasma simulation in magnetic fusion environments" (Computer Physics Communications 320, Article 109993).
Pub Date: 2026-03-01 | Epub Date: 2025-12-19 | DOI: 10.1016/j.cpc.2025.110007
Cong-Zhang Gao , Jian-Wei Yin , Ying Cai , Xu Liu , Zheng-Feng Fan , Pei Wang , Shao-Ping Zhu
In recent decades, radiative transfer through binary stochastic mixtures (i.e., mixtures in which particulate high-Z materials are randomly dispersed in a low-Z background material, where Z denotes the atomic number) has received great attention in many scientific and engineering disciplines, and accurate, efficient simulations in multiple dimensions are much in demand. In this work, we focus on efficient algorithms for accurately simulating radiative transfer in binary stochastic mixtures in two dimensions. Our computational model solves the radiation-material coupled equations for an ensemble of binary stochastic mixtures. In this context, a subgrid-based nearest-neighbor searching (SNNS) algorithm is introduced to explicitly model the binary stochastic mixture, resulting in O(N) scaling with the number of particles and greater flexibility than the fast random sequential addition (RSA) algorithm. To determine the grid-based parameters accurately, a particle-resolved algorithm is developed that divides the relationship between a particle's location and the grid into four categories, reproducing analytical results exactly and efficiently. A parallel algorithm using spatial domain decomposition with directed acyclic graph (DAG) techniques is proposed to solve the radiation-material coupled equations efficiently. These algorithms are combined to enable accurate and efficient two-dimensional simulations, validated against reported benchmark results. We find that convergent results require sufficiently high particle resolution and a high-order quadrature. Although results based on a single physical realization are somewhat representative, ensemble-averaged results are more meaningful because they avoid statistical anomalies in some cases. Moreover, case studies on the influence of the particle size distribution, the validation of effective opacity models, and the particle size effect are presented and analyzed.
Our work provides efficient algorithms for routinely simulating radiative transfer in binary stochastic mixtures in multiple dimensions, which can yield benchmark results for relevant analytical homogenized models.
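The O(N) scaling of a subgrid-based neighbor search can be illustrated with a simple 2D disk-packing sketch: if the grid cell size equals the particle diameter, each overlap test only inspects the 3x3 block of cells around a candidate, so the cost per insertion is O(1). This is a hedged illustration of the general technique, not the paper's SNNS implementation; all names are hypothetical.

```python
import random

def pack_disks(n, radius, box, rng, max_tries=100000):
    """Place n non-overlapping disks of equal radius in a square box.
    A uniform subgrid with cell side >= one diameter limits each
    overlap test to a 3x3 neighborhood: O(1) per test, O(N) overall."""
    ncell = max(1, int(box / (2.0 * radius)))
    cell = box / ncell                       # cell side >= 2 * radius
    grid = {}                                # (ix, iy) -> list of centers
    pts = []
    tries = 0
    while len(pts) < n and tries < max_tries:
        tries += 1
        x = rng.uniform(radius, box - radius)
        y = rng.uniform(radius, box - radius)
        ix, iy = int(x / cell), int(y / cell)
        ok = True
        for jx in range(ix - 1, ix + 2):     # only the 3x3 cell block
            for jy in range(iy - 1, iy + 2):
                for (px, py) in grid.get((jx, jy), ()):
                    if (x - px) ** 2 + (y - py) ** 2 < (2 * radius) ** 2:
                        ok = False
        if ok:
            grid.setdefault((ix, iy), []).append((x, y))
            pts.append((x, y))
    return pts

rng = random.Random(1)
pts = pack_disks(200, 0.01, 1.0, rng)
```

A plain all-pairs rejection test would cost O(N) per insertion (O(N^2) overall); the bucket grid is what recovers linear scaling as the particle count grows.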
Title: "Efficient algorithms for accurately simulating radiative transfer in binary stochastic mixtures in two dimensions" (Computer Physics Communications 320, Article 110007).
Pub Date: 2026-03-01 | Epub Date: 2025-12-04 | DOI: 10.1016/j.cpc.2025.109984
R. Rossi , A. Murari , T. Craciunescu , N. Rutigliano , I. Wyss , J. Vega , P. Gaudio , M. Gelfusa , on behalf of JET Contributors* and EUROfusion Tokamak Exploitation Team
Autoencoders are neural networks capable of learning compact representations of data through unsupervised learning. By encoding input data into a lower-dimensional space and subsequently reconstructing it, they enable efficient feature extraction, denoising, anomaly detection, and other applications. This work develops autoencoder-based methodologies tailored to time-dependent problems, specifically for reconstructing hidden dynamics, modelling governing equations, and detecting causal relationships.
A physics-informed autoencoder (PIC-AE) is introduced to impose physical or mathematical constraints on the latent representation, allowing the discovery of fundamental dynamics and model parameters. The PIC-AE effectively reconstructs equivalent dynamical systems from indirect measurements, as exemplified by numerical tests based on the Lotka-Volterra system of equations. It has been applied to edge-localized modes (ELMs) in nuclear fusion plasmas to assess whether they follow a Lotka-Volterra model, and the results indicate the need for alternative sets of equations.
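For readers unfamiliar with the Lotka-Volterra benchmark mentioned above: the system is dx/dt = ax − bxy, dy/dt = −cy + dxy, and along any trajectory the quantity V = d·x − c·ln x + b·y − a·ln y is conserved, which makes it a convenient check for numerical tests. A minimal RK4 sketch (not the paper's PIC-AE code; parameters are illustrative):

```python
import math

def lotka_volterra_step(x, y, a, b, c, d, dt):
    """One RK4 step of dx/dt = a x - b x y, dy/dt = -c y + d x y."""
    def f(x, y):
        return a * x - b * x * y, -c * y + d * x * y
    k1x, k1y = f(x, y)
    k2x, k2y = f(x + 0.5 * dt * k1x, y + 0.5 * dt * k1y)
    k3x, k3y = f(x + 0.5 * dt * k2x, y + 0.5 * dt * k2y)
    k4x, k4y = f(x + dt * k3x, y + dt * k3y)
    return (x + dt * (k1x + 2 * k2x + 2 * k3x + k4x) / 6.0,
            y + dt * (k1y + 2 * k2y + 2 * k3y + k4y) / 6.0)

def invariant(x, y, a, b, c, d):
    """Conserved quantity V = d x - c ln x + b y - a ln y."""
    return d * x - c * math.log(x) + b * y - a * math.log(y)

# Integrate to t = 10 and check that V barely drifts.
a, b, c, d = 1.0, 0.5, 1.0, 0.5
x, y = 2.0, 1.0
V0 = invariant(x, y, a, b, c, d)
for _ in range(1000):
    x, y = lotka_volterra_step(x, y, a, b, c, d, 0.01)
V1 = invariant(x, y, a, b, c, d)
```

A learned latent dynamics that genuinely follows Lotka-Volterra equations would preserve this invariant; a systematic drift is one symptom that an alternative model is needed.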
For causality detection, a novel autoencoder-based method has been developed to overcome the limitations of traditional techniques. This new approach accurately identifies causal relationships while providing a probabilistic measure of their strength. Applied to nuclear fusion data, it has confirmed the causal influence of ion cyclotron resonance heating (ICRH) on sawtooth crashes, reproducing previous findings obtained with different methodologies and extending the analysis to the spatio-temporal domain.
Although initially designed for nuclear fusion applications, the proposed methodologies are broadly applicable to any scientific or technological domain in which time-series analysis is crucial. Indeed, the developed tools have the representational capabilities of deep learning networks but are much less prone to overfitting and can be accurate even with sparse data. Future work will explore alternative representations for ELMs and further validate the causality detection method across different datasets.
Title: "On the use of autoencoders to study the dynamics and the causality relations of complex systems with applications to nuclear fusion" (Computer Physics Communications 320, Article 109984).
Pub Date: 2026-03-01 | Epub Date: 2025-12-18 | DOI: 10.1016/j.cpc.2025.109995
Stephen E. Gant , Francesco Ricci , Guy Ohad , Ashwin Ramasubramaniam , Leeor Kronik , Jeffrey B. Neaton
We introduce an automated workflow for generating non-empirical Wannier-localized optimally-tuned screened range-separated hybrid (WOT-SRSH) functionals. WOT-SRSH functionals have been shown to yield highly accurate fundamental band gaps, band structures, and optical spectra for bulk and 2D semiconductors and insulators. Our workflow automatically and efficiently determines the WOT-SRSH functional parameters for a given crystal structure and composition, approximately enforcing the correct screened long-range Coulomb interaction and an ionization potential ansatz. In contrast to previous manual tuning approaches, our tuning procedure relies on a new search algorithm that only requires a few hybrid functional calculations with minimal user input. We demonstrate our workflow on 23 previously studied semiconductors and insulators, reporting the same high level of accuracy. By automating the tuning process and improving its computational efficiency, the approach outlined here enables applications of the WOT-SRSH functional to compute spectroscopic and optoelectronic properties for a wide range of materials.
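The idea of a tuning search that needs only a few expensive evaluations can be illustrated with plain bisection on a monotone mismatch function, where each evaluation stands in for one hybrid-functional calculation. This is a generic sketch, not the authors' actual search algorithm; the quadratic mismatch below is a toy stand-in for the ionization-potential condition.

```python
def bisect_tune(f, lo, hi, tol=1e-3):
    """Bisection on a mismatch function f that changes sign on [lo, hi].
    Returns (parameter, n_evals); n_evals counts calls to f, each of
    which stands in for one expensive electronic-structure calculation."""
    flo, fhi = f(lo), f(hi)
    assert flo * fhi <= 0, "mismatch must change sign on [lo, hi]"
    n = 2
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        fm = f(mid)
        n += 1
        if flo * fm <= 0:
            hi, fhi = mid, fm
        else:
            lo, flo = mid, fm
    return 0.5 * (lo + hi), n

# Toy stand-in: mismatch f(p) = p^2 - 0.25 vanishes at the "tuned" p = 0.5.
alpha, evals = bisect_tune(lambda p: p ** 2 - 0.25, 0.0, 1.0, tol=1e-4)
```

Bisection halves the bracket each step, so a tolerance of 1e-4 on a unit interval costs about 16 evaluations; smarter bracketing or interpolation (as an automated workflow would use) reduces this further.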
Title: "Automated workflow for non-empirical Wannier-localized optimal tuning of range-separated hybrid functionals" (Computer Physics Communications 320, Article 109995).
Pub Date: 2026-03-01 | Epub Date: 2025-12-23 | DOI: 10.1016/j.cpc.2025.110010
Taeyoung Jeong , Kun Hee Ye , Seungjae Yoon , Dohyun Kim , Yunjae Kim , Cheol Seong Hwang , Jung-Hae Choi
Multiscale modeling, which integrates material properties from ab initio calculations into continuum-scale simulations, is a promising strategy for optimizing semiconductor devices. However, a key challenge remains: while ab initio methods provide diffusion parameters specific to individual migration paths, continuum equations require a single effective set of parameters that captures the overall diffusion behavior. To address this issue, we present VacHopPy, an open-source Python package for vacancy hopping analysis based on molecular dynamics (MD). VacHopPy extracts an effective set of hopping parameters, including hopping distance, hopping barrier, number of effective paths, correlation factor, and attempt frequency, by statistically integrating energetic, kinetic, and geometric contributions across all paths. It also includes tools for tracking vacancy trajectories and for detecting phase transitions during MD simulations. The applicability of VacHopPy is demonstrated in three representative materials: face-centered cubic Al, rutile TiO2, and monoclinic HfO2. The extracted effective parameters reproduce temperature-dependent diffusion behavior and are in good agreement with previous experimental data. Provided in a simplified form, these parameters are well suited for continuum-scale models and remain valid over a wide temperature range spanning several hundred kelvins. Furthermore, VacHopPy inherently accounts for anisotropy in thermal vibrations, a factor often overlooked, making it suitable for simulating diffusion in complex crystals. Overall, VacHopPy establishes a robust bridge between atomic- and continuum-scale models, enabling more reliable multiscale simulations.

Program Summary
Program Title: VacHopPy
CPC Library link to program files: https://doi.org/10.17632/nfd44zrb24.1
Developer's repository link: https://github.com/TY-Jeong/VacHopPy
Licensing provisions: MIT License
Programming language: Python
Supplementary material: Supplementary Figures (S1-S11), Supplementary Tables (S1-S6), and Supplementary Notes (1-4) are provided in a separate PDF file.
Nature of problem: For modeling of vacancy-mediated diffusion, ab initio calculations provide path-specific diffusion parameters that are not directly compatible with continuum-scale models, which typically require a single set of effective parameters. Such incompatibility poses a significant challenge in accurately integrating atomistic diffusion behavior into multiscale simulation frameworks, particularly when multiple hopping paths exist in a material system.
Solution method: Vacancy trajectories are identif
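The effective hopping parameters listed above (hopping distance a, barrier Ea, number of effective paths n, correlation factor f, attempt frequency ν) map onto the textbook random-walk estimate of vacancy diffusion, D = f·n·a²·ν·exp(−Ea/kBT)/(2d). The sketch below evaluates that standard expression; it is not VacHopPy's actual API, and the numbers are illustrative.

```python
import math

K_B = 8.617333262e-5  # Boltzmann constant in eV/K

def hop_rate(nu, barrier_ev, temp_k):
    """Arrhenius rate for a single hop: k = nu * exp(-Ea / kB T)."""
    return nu * math.exp(-barrier_ev / (K_B * temp_k))

def diffusion_coefficient(a_cm, barrier_ev, nu, n_paths, f_corr, temp_k, dim=3):
    """Textbook random-walk estimate D = f * n * a^2 * k / (2 d),
    with the hop distance in cm so D comes out in cm^2/s."""
    return (f_corr * n_paths * a_cm ** 2
            * hop_rate(nu, barrier_ev, temp_k) / (2 * dim))

# Illustrative numbers: a ~2.86 Angstrom hop over a 0.6 eV barrier,
# nu ~ 10 THz attempt frequency, fcc-like coordination, at 1000 K.
D = diffusion_coefficient(a_cm=2.86e-8, barrier_ev=0.6, nu=1.0e13,
                          n_paths=12, f_corr=0.78, temp_k=1000.0)
```

Because every path-specific barrier enters through an exponential, a single effective (Ea, ν) pair fitted over a temperature window is exactly the "simplified form" a continuum solver wants; the package's contribution is extracting that pair consistently from MD statistics.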
Title: "VacHopPy: A Python package for vacancy hopping analysis based on molecular dynamics simulations" (Computer Physics Communications 320, Article 110010).
Pub Date: 2026-03-01 | Epub Date: 2025-12-16 | DOI: 10.1016/j.cpc.2025.109992
Navaneet Villodi, Prabhu Ramachandran
The accuracy of meshless methods like Smoothed Particle Hydrodynamics (SPH) is highly dependent on the quality of the particle distribution. Existing particle initialization techniques often struggle to simultaneously achieve adaptive resolution, handle intricate boundaries, and efficiently generate well-packed distributions inside and outside a boundary. This work presents a fast and robust particle initialization method that achieves these goals using standard SPH building blocks. Our approach enables simultaneous initialization of fluid and solid regions, supports arbitrary geometries, and achieves high-quality, quasi-uniform particle arrangements without complex procedures like surface bonding. Extensive results in both 2D and 3D demonstrate that the obtained particle distributions exhibit good boundary conformity, low spatial disorder, and minimal density variation, all with significantly reduced computational cost compared to existing approaches. This work paves the way for automated particle initialization to accurately model flow in and around bodies with meshless methods, particularly with SPH.
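The "standard SPH building blocks" referred to above include a smoothing kernel and the summation-density estimate; on a well-packed, quasi-uniform arrangement the summation density should be nearly constant, which is one way to quantify packing quality. A minimal sketch using the standard cubic-spline kernel (generic SPH formulas, not the authors' code):

```python
import math

def cubic_spline_w(r, h, dim=2):
    """Standard cubic-spline SPH kernel W(r, h), compact support 2h,
    with the usual 1D/2D/3D normalization constants."""
    sigma = {1: 2.0 / (3.0 * h),
             2: 10.0 / (7.0 * math.pi * h ** 2),
             3: 1.0 / (math.pi * h ** 3)}[dim]
    q = r / h
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q ** 2 + 0.75 * q ** 3)
    if q < 2.0:
        return sigma * 0.25 * (2.0 - q) ** 3
    return 0.0

def sph_density(xi, points, mass, h):
    """Summation density rho(xi) = sum_j m * W(|xi - xj|, h)."""
    return sum(mass * cubic_spline_w(math.dist(xi, pj), h) for pj in points)

# Uniform 2D lattice, spacing 0.1; for an interior point the summation
# density should be close to mass / spacing^2 = 1.0.
pts = [(0.1 * i, 0.1 * j) for i in range(21) for j in range(21)]
rho = sph_density((1.0, 1.0), pts, mass=0.01, h=0.13)
```

Low density variation across all particles (not just one probe point) is precisely the "minimal density variation" criterion the abstract reports for its initialized distributions.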
Title: "Rapid variable resolution particle initialization for complex geometries" (Computer Physics Communications 320, Article 109992).
Pub Date: 2026-03-01 | Epub Date: 2025-12-07 | DOI: 10.1016/j.cpc.2025.109979
Yichi Zhang , Meera Sitharam
This article describes Uniform Cartesian (UC), an efficient deterministic methodology for computing configurational entropy via the relative volume of energy basins. The methodology is specifically tailored to short-ranged and hard-sphere pair-potential assembly systems but adapts to longer-ranged pair potentials. UC both leverages and significantly extends EASAL (Efficient Atlasing and Sampling of Assembly Landscapes), a recent methodology based on modern discrete-geometry concepts including topological roadmapping and atlasing. A unique distance-based Cayley coordinate parametrization achieves sampling of nonlinearly constrained regions of intrinsic dimension much lower than the ambient degrees of freedom, while avoiding gradient descent and retraction maps. Thereby, EASAL navigates the interacting twin curses of dimensionality and the topological complexity of nearly disconnected and highly separated configurational regions. Additionally, UC iteratively maps between Cayley and Cartesian coordinates, avoiding ill-conditioning from Jacobian and Hessian computations, and guarantees correctness, optimal time and space complexity, and efficiency-accuracy tradeoffs through rigorous algorithmic analysis. Variants of UC accurately compute the relative volume of an energy basin for transmembrane protein assembly within hours, even without parallelization. This article's emphasis is not extensive benchmark comparisons of large-scale parallel implementations or prevailing methods. Rather, proof-of-concept demonstrations of the unique features of UC are given, along with test-case comparisons between the Markov chain Monte Carlo (MCMC) method, the new UC variants, and the "vanilla" EASAL. A curated open-source software implementation is provided.
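As a point of contrast with UC's deterministic approach, the baseline stochastic way to estimate the relative volume of a region is hit-or-miss Monte Carlo sampling, whose error decays only as 1/sqrt(n). A toy sketch, with a unit disk inside a square standing in for an energy basin inside its ambient configuration space (all names are hypothetical):

```python
import random

def mc_relative_volume(indicator, sampler, n, rng):
    """Hit-or-miss Monte Carlo: fraction of sampled configurations
    that fall inside the region defined by `indicator`."""
    hits = sum(1 for _ in range(n) if indicator(sampler(rng)))
    return hits / n

# Toy "basin": the unit disk inside the square [-1, 1]^2.
# Exact relative volume is pi / 4 ~ 0.7854.
rng = random.Random(42)
est = mc_relative_volume(
    indicator=lambda p: p[0] ** 2 + p[1] ** 2 <= 1.0,
    sampler=lambda r: (r.uniform(-1, 1), r.uniform(-1, 1)),
    n=20000, rng=rng)
```

The 1/sqrt(n) error and the difficulty of sampling nearly disconnected, low-dimensional regions in a high-dimensional ambient space are exactly what the deterministic Cayley-parametrized approach is designed to avoid.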
Title: "Mixed Cayley and Cartesian sampling for fast and accurate coverage and configurational entropy computation" (Computer Physics Communications 320, Article 109979).
Pub Date: 2026-03-01 | Epub Date: 2025-12-18 | DOI: 10.1016/j.cpc.2025.110002
Josiah Roberts , Biswas Rijal , Simon Divilov , Jon-Paul Maria , William G. Fahrenholtz , Douglas E. Wolfe , Donald W. Brenner , Stefano Curtarolo , Eva Zurek
<div><div>We present the Plan for Robust and Accurate Potentials (PRAPs), a software package for training and using moment tensor potentials (MTPs) in concert with the Machine Learned Interatomic Potentials (MLIP) software package. PRAPs provides an automated workflow to train MTPs using active learning procedures, and a variety of utilities to ease and improve workflows when utilizing the MLIP software. PRAPs was originally developed in the context of crystal structure prediction, in which one calculates convex hulls and predicts low energy metastable and thermodynamically stable structures, but the potentials PRAPs develops are not limited to such applications. PRAPs produces two potentials, one capable of rough estimates of the energies, forces and stresses of almost any chemical structure in the specified compositional space – the Robust Potential – and a second potential intended to provide more accurate descriptions of ground state and metastable structures – the Accurate Potential. We also present a Python library, <em>mliputils</em>, designed to assist users in working with the chemical structural files used by the MLIP package.</div></div><div><h3>PROGRAM SUMMARY</h3><div><em>Program Title:</em> The Plan for Robust and Accurate Potentials (PRAPs)</div><div><em>CPC Library link to program files:</em> (to be added by Technical Editor)</div><div><em>Developer’s repository link:</em> <span><span>https://github.com/Dryctarth/PRAPs.git</span><svg><path></path></svg></span></div><div><em>Code Ocean capsule:</em> (to be added by Technical Editor)</div><div><em>Licensing provisions:</em> BSD 3-clause</div><div><em>Programming language:</em> Bash, Python</div><div><em>Supplementary material:</em> User manual</div><div><em>Nature of problem:</em> Keeping track of all the steps involved in training moment tensor potentials across several systems has proven to be a challenge in need of project management.
For every large step, like training, there are several small, mundane commands that need to be handled, and these must all be repeated identically across any chemical system users may care about (while tracking variations). Finally, data must be passed among the AFLOW, MLIP, and VASP programs.</div><div><em>Solution method:</em> The PRAPs package incorporates a degree of automation, handling the job submissions and tasks needed to train multiple moment tensor potentials, managing files, identifying and removing unphysical chemical structures, and performing some analytical tasks. The package also includes some simple utility functions to allow users to better read, write, and manipulate MLIP’s chemical structure file format.</div><div><em>Additional comments including restrictions and unusual features:</em> Requires a local installation of Automatic FLOW (AFLOW) v3.10+, the Vienna <em>ab initio</em> Software Package (VASP) v5+, and the Machine Learning for Interatomic Potentials (MLIP) v2+ program packages.</div>
{"title":"A software package for generating robust and accurate potentials using the moment tensor potential framework","authors":"Josiah Roberts , Biswas Rijal , Simon Divilov , Jon-Paul Maria , William G. Fahrenholtz , Douglas E. Wolfe , Donald W. Brenner , Stefano Curtarolo , Eva Zurek","doi":"10.1016/j.cpc.2025.110002","DOIUrl":"10.1016/j.cpc.2025.110002","url":null,"abstract":"<div><div>We present the Plan for Robust and Accurate Potentials (PRAPs), a software package for training and using moment tensor potentials (MTPs) in concert with the Machine Learned Interatomic Potentials (MLIP) software package. PRAPs provides an automated workflow to train MTPs using active learning procedures, and a variety of utilities to ease and improve workflows when utilizing the MLIP software. PRAPs was originally developed in the context of crystal structure prediction, in which one calculates convex hulls and predicts low energy metastable and thermodynamically stable structures, but the potentials PRAPs develops are not limited to such applications. PRAPs produces two potentials, one capable of rough estimates of the energies, forces and stresses of almost any chemical structure in the specified compositional space – the Robust Potential – and a second potential intended to provide more accurate descriptions of ground state and metastable structures – the Accurate Potential. 
We also present a Python library, <em>mliputils</em>, designed to assist users in working with the chemical structural files used by the MLIP package.</div></div><div><h3>PROGRAM SUMMARY</h3><div><em>Program Title:</em> The Plan for Robust and Accurate Potentials (PRAPs)</div><div><em>CPC Library link to program files:</em> (to be added by Technical Editor)</div><div><em>Developer’s repository link:</em> <span><span>https://github.com/Dryctarth/PRAPs.git</span><svg><path></path></svg></span></div><div><em>Code Ocean capsule:</em> (to be added by Technical Editor)</div><div><em>Licensing provisions:</em> BSD 3-clause</div><div><em>Programming language:</em> Bash, Python</div><div><em>Supplementary material:</em> User manual</div><div><em>Nature of problem:</em> Keeping track of all the steps involved in training moment tensor potentials across several systems has proven to be a challenge in need of project management. For every large step, like training, there are several small, mundane commands that need to be handled, and these must all be repeated identically across any chemical system users may care about (while tracking variations). Finally, data must be passed among the AFLOW, MLIP, and VASP programs.</div><div><em>Solution method:</em> The PRAPs package incorporates a degree of automation, handling the job submissions and tasks needed to train multiple moment tensor potentials, managing files, identifying and removing unphysical chemical structures, and performing some analytical tasks.
The package also includes some simple utility functions to allow users to better read, write, and manipulate MLIP’s chemical structure file format.</div><div><em>Additional comments including restrictions and unusual features:</em> Requires a local installation of Automatic FLOW (AFLOW) v3.10+, the Vienna <em>ab initio</em> Software Package (VASP) v5+, and the Machine Learning for Interatomic Potentials (MLIP) v2+ program packages.</div>","PeriodicalId":285,"journal":{"name":"Computer Physics Communications","volume":"320 ","pages":"Article 110002"},"PeriodicalIF":3.4,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145920954","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
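The active-learning procedure summarized above (train on the current set, grade candidate structures for extrapolation, promote the worst offenders into the training set, retrain) can be sketched as a toy loop. All names and the grading scheme below are hypothetical stand-ins, not the PRAPs or MLIP API:

```python
def select_extrapolating(grades, threshold):
    """Return indices of candidates whose extrapolation grade exceeds the
    threshold (these would be sent to DFT and added to the training set)."""
    return [i for i, g in enumerate(grades) if g > threshold]

def active_learning_loop(candidates, grade_fn, threshold=2.0, max_iters=10):
    """Toy active-learning cycle: grade the remaining candidates against the
    current training set, absorb the worst extrapolators, and repeat until
    no candidate extrapolates beyond the threshold."""
    training_set, remaining = [], list(candidates)
    for _ in range(max_iters):
        grades = [grade_fn(c, training_set) for c in remaining]
        picked = select_extrapolating(grades, threshold)
        if not picked:
            break  # potential is reliable on all remaining structures
        training_set += [remaining[i] for i in picked]
        remaining = [c for i, c in enumerate(remaining) if i not in picked]
    return training_set, remaining
```

In the real workflow the grading step would query the trained MTP's extrapolation measure and the promotion step would trigger DFT single-point calculations; here both are abstracted behind `grade_fn`.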
Pub Date : 2026-03-01Epub Date: 2025-11-26DOI: 10.1016/j.cpc.2025.109954
Mehmet Ali Sarsıl , Mubashir Mansoor , Mert Saraçoğlu , Servet Timur , Onur Ergen
<div><div>The diverse spectrum of material characteristics, including band gap, mechanical moduli, color, phonon and electronic density of states, along with catalytic and surface properties, is intricately intertwined with the atomic structure and the corresponding interatomic bond lengths. This interconnection extends to the manifestation of interplanar spacings within a crystalline lattice. Analysis of these interplanar spacings and the comprehension of any deviations, whether lattice compression or expansion (commonly referred to as strain), hold paramount significance in unraveling various unknowns within the field. Transmission Electron Microscopy (TEM) is widely used to capture this atomic-scale ordering, facilitating direct investigation of interplanar spacings. However, creating critical contour maps for visualizing and interpreting lattice stresses in TEM images remains a challenging task. This study introduces an open-source, AI-assisted application, developed entirely in Python, for processing TEM images to facilitate strain analysis through advanced visualization techniques. This application is designed to process a diverse range of materials, including nanoparticles, 2D materials, pure crystals, and solid solutions. By converting local variations in interplanar spacings into contour maps, it provides a visual representation of lattice expansion and compression. With highly versatile settings, as detailed in this paper, the tool is readily accessible for TEM image-based material analysis. It facilitates an in-depth exploration of strain engineering by generating strain contour maps at the atomic scale, offering valuable insights into material properties.
<strong>Program summary</strong> <em>Program Title:</em> PyNanoSpacing <em>CPC Library link to program files:</em> “<span><span>https://doi.org/10.17632/y864t5ykxx.1</span><svg><path></path></svg></span> ” <em>Developer’s repository link:</em> “<span><span>https://github.com/malisarsil/PyNanoSpacing</span><svg><path></path></svg></span> ” <em>Licensing provisions:</em> MIT license <em>Programming language:</em> Python 3.11 <em>Nature of problem:</em> Transmission Electron Microscopy (TEM) is widely used to analyze lattice structures in materials, but extracting quantitative strain information from TEM images remains challenging. Existing tools often lack automation, requiring manual calibration and region selection, leading to inconsistencies. Researchers need a user-friendly, automated solution to analyze local lattice strains and interplanar spacing variations efficiently. <em>Solution method:</em> The developed desktop application simplifies TEM image strain analysis by automating key steps. It extracts image details (such as scale and resolution) and detects atomic regions using AI-based segmentation. A correction step ensures proper alignment before measuring interlayer distances, which are then color-mapped to show strain variations. A smoothing technique is applied to re
{"title":"Mapping strain at the atomic scale with PyNanospacing: An AI-assisted approach to TEM image processing and visualization","authors":"Mehmet Ali Sarsıl , Mubashir Mansoor , Mert Saraçoğlu , Servet Timur , Onur Ergen","doi":"10.1016/j.cpc.2025.109954","DOIUrl":"10.1016/j.cpc.2025.109954","url":null,"abstract":"<div><div>The diverse spectrum of material characteristics, including band gap, mechanical moduli, color, phonon and electronic density of states, along with catalytic and surface properties, is intricately intertwined with the atomic structure and the corresponding interatomic bond lengths. This interconnection extends to the manifestation of interplanar spacings within a crystalline lattice. Analysis of these interplanar spacings and the comprehension of any deviations, whether lattice compression or expansion (commonly referred to as strain), hold paramount significance in unraveling various unknowns within the field. Transmission Electron Microscopy (TEM) is widely used to capture this atomic-scale ordering, facilitating direct investigation of interplanar spacings. However, creating critical contour maps for visualizing and interpreting lattice stresses in TEM images remains a challenging task. This study introduces an open-source, AI-assisted application, developed entirely in Python, for processing TEM images to facilitate strain analysis through advanced visualization techniques. This application is designed to process a diverse range of materials, including nanoparticles, 2D materials, pure crystals, and solid solutions. By converting local variations in interplanar spacings into contour maps, it provides a visual representation of lattice expansion and compression. With highly versatile settings, as detailed in this paper, the tool is readily accessible for TEM image-based material analysis.
It facilitates an in-depth exploration of strain engineering by generating strain contour maps at the atomic scale, offering valuable insights into material properties. <strong>Program summary</strong> <em>Program Title:</em> PyNanoSpacing <em>CPC Library link to program files:</em> “<span><span>https://doi.org/10.17632/y864t5ykxx.1</span><svg><path></path></svg></span> ” <em>Developer’s repository link:</em> “<span><span>https://github.com/malisarsil/PyNanoSpacing</span><svg><path></path></svg></span> ” <em>Licensing provisions:</em> MIT license <em>Programming language:</em> Python 3.11 <em>Nature of problem:</em> Transmission Electron Microscopy (TEM) is widely used to analyze lattice structures in materials, but extracting quantitative strain information from TEM images remains challenging. Existing tools often lack automation, requiring manual calibration and region selection, leading to inconsistencies. Researchers need a user-friendly, automated solution to analyze local lattice strains and interplanar spacing variations efficiently. <em>Solution method:</em> The developed desktop application simplifies TEM image strain analysis by automating key steps. It extracts image details (such as scale and resolution) and detects atomic regions using AI-based segmentation. A correction step ensures proper alignment before measuring interlayer distances, which are then color-mapped to show strain variations. 
A smoothing technique is applied to re","PeriodicalId":285,"journal":{"name":"Computer Physics Communications","volume":"320 ","pages":"Article 109954"},"PeriodicalIF":3.4,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145681710","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
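The quantity behind the contour maps, local strain inferred from interplanar-spacing variations, reduces to the simple relation strain = (d - d_ref)/d_ref. A minimal NumPy sketch under assumed names (this is not the PyNanoSpacing API):

```python
import numpy as np

def strain_map(spacings, d_ref):
    """Convert a 2-D field of measured interplanar spacings (same units as
    d_ref) into a percentage strain field: positive values indicate lattice
    expansion, negative values indicate compression."""
    spacings = np.asarray(spacings, dtype=float)
    return 100.0 * (spacings - d_ref) / d_ref

# e.g. spacings (in nm) measured along lattice fringes in a TEM image, with
# the bulk spacing d_ref taken as the strain-free reference
field = strain_map([[0.204, 0.208], [0.200, 0.196]], d_ref=0.200)
```

In the application itself, such a field would be smoothed and rendered as a color-mapped contour plot; the sketch stops at the raw per-pixel strain values.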
Pub Date : 2026-03-01Epub Date: 2025-12-18DOI: 10.1016/j.cpc.2025.110004
Alberto Cuadra, César Huete, Marcos Vera
The Combustion Toolbox (CT) is a newly developed open-source thermochemical code designed to solve problems involving chemical equilibrium for both gas- and condensed-phase species. The kernel of the code is based on the theoretical framework set forth by NASA’s computer program CEA (Chemical Equilibrium with Applications) while incorporating new algorithms that significantly improve both convergence rate and robustness. The thermochemical properties are computed under the ideal gas approximation using an up-to-date version of NASA’s 9-coefficient polynomial fits. These fits use the Third Millennium database, which includes the available values from Active Thermochemical Tables. Combustion Toolbox is programmed in MATLAB with an object-oriented architecture composed of three main modules: CT-EQUIL, CT-SD, and CT-ROCKET. The kernel module, CT-EQUIL, minimizes the Gibbs/Helmholtz free energy of the system using the technique of Lagrange multipliers combined with a multidimensional Newton-Raphson method, upon the condition that two state functions are used to define the mixture properties (e.g., enthalpy and pressure). CT-SD solves processes involving strong changes in dynamic pressure, such as steady shock and detonation waves under normal and oblique incidence angles. Finally, CT-ROCKET estimates rocket engine performance under highly idealized conditions. The new tool is equipped with a versatile Graphical User Interface and has been successfully used for teaching and research activities over the last six years. Results are in excellent agreement with CEA, Cantera within Caltech’s Shock and Detonation Toolbox (SD-Toolbox), and the Thermochemical Equilibrium Abundances (TEA) code. CT is available under an open-source GPLv3 license on GitHub at https://github.com/CombustionToolbox/combustion_toolbox, and its documentation can be found at https://combustion-toolbox-website.readthedocs.io.
{"title":"Combustion Toolbox: An open-source thermochemical code for gas- and condensed-phase problems involving chemical equilibrium","authors":"Alberto Cuadra, César Huete, Marcos Vera","doi":"10.1016/j.cpc.2025.110004","DOIUrl":"10.1016/j.cpc.2025.110004","url":null,"abstract":"<div><div>The Combustion Toolbox (CT) is a newly developed open-source thermochemical code designed to solve problems involving chemical equilibrium for both gas- and condensed-phase species. The kernel of the code is based on the theoretical framework set forth by NASA’s computer program CEA (Chemical Equilibrium with Applications) while incorporating new algorithms that significantly improve both convergence rate and robustness. The thermochemical properties are computed under the ideal gas approximation using an up-to-date version of NASA’s 9-coefficient polynomial fits. These fits use the Third Millennium database, which includes the available values from Active Thermochemical Tables. Combustion Toolbox is programmed in MATLAB with an object-oriented architecture composed of three main modules: CT-EQUIL, CT-SD, and CT-ROCKET. The kernel module, CT-EQUIL, minimizes the Gibbs/Helmholtz free energy of the system using the technique of Lagrange multipliers combined with a multidimensional Newton-Raphson method, upon the condition that two state functions are used to define the mixture properties (e.g., enthalpy and pressure). CT-SD solves processes involving strong changes in dynamic pressure, such as steady shock and detonation waves under normal and oblique incidence angles. Finally, CT-ROCKET estimates rocket engine performance under highly idealized conditions. The new tool is equipped with a versatile Graphical User Interface and has been successfully used for teaching and research activities over the last six years. 
Results are in excellent agreement with CEA, Cantera within Caltech’s Shock and Detonation Toolbox (SD-Toolbox), and the Thermochemical Equilibrium Abundances (TEA) code. CT is available under an open-source GPLv3 license on GitHub at <span><span>https://github.com/CombustionToolbox/combustion_toolbox</span><svg><path></path></svg></span>, and its documentation can be found at <span><span>https://combustion-toolbox-website.readthedocs.io</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":285,"journal":{"name":"Computer Physics Communications","volume":"320 ","pages":"Article 110004"},"PeriodicalIF":3.4,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145880067","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
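As a toy illustration of the Newton-Raphson equilibrium solve mentioned in this abstract (not CT's CT-EQUIL kernel, which minimizes Gibbs free energy with Lagrange multipliers over many species), consider the ideal-gas dissociation A2 = 2A at fixed temperature and pressure, where the degree of dissociation alpha satisfies Kp = 4*alpha^2*P/(1 - alpha^2):

```python
def dissociation_alpha(Kp, P=1.0, alpha=0.5, tol=1e-12, max_iter=50):
    """Newton-Raphson solve of 4*alpha^2*P/(1 - alpha^2) = Kp for the degree
    of dissociation alpha of an ideal-gas reaction A2 <=> 2A at pressure P.
    Toy example only: a full equilibrium code iterates over all species."""
    for _ in range(max_iter):
        f = 4.0 * alpha**2 * P / (1.0 - alpha**2) - Kp
        df = 8.0 * alpha * P / (1.0 - alpha**2) ** 2  # analytic derivative
        step = f / df
        alpha -= step
        alpha = min(max(alpha, 1e-12), 1.0 - 1e-12)  # keep alpha physical
        if abs(step) < tol:
            break
    return alpha
```

The quadratic convergence of the Newton step is what makes this family of solvers attractive for equilibrium problems, provided the iterate is kept inside the physically meaningful domain, as the clamping line does here.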