Pub Date: 2024-07-04
DOI: 10.1016/j.cpc.2024.109305
David Vallés-Pérez , Susana Planelles , Vicent Quilis , Frederick Groth , Tirso Marin-Gilabert , Klaus Dolag
Astrophysical turbulent flows display an intrinsically multi-scale nature, making their numerical simulation and the subsequent analysis of simulated data a complex problem. In particular, two fundamental steps in the study of turbulent velocity fields are the Helmholtz-Hodge decomposition (compressive+solenoidal; HHD) and the Reynolds decomposition (bulk+turbulent; RD). These operations are relatively simple to perform numerically for uniformly sampled data, such as that produced by Eulerian, fixed-grid simulations; but their computation is considerably more complex for non-uniformly sampled data, such as that stemming from particle-based or meshless simulations. In this paper, we describe, implement and test vortex-p, a publicly available tool evolved from the vortex code, to perform both decompositions on the velocity fields of particle-based simulations, whether from smoothed particle hydrodynamics (SPH), moving-mesh or meshless codes. The algorithm relies on the creation of an ad hoc adaptive mesh refinement (AMR) set of grids, on which the input velocity field is represented. The HHD is then addressed by means of elliptic solvers, while for the RD we adapt an iterative, multi-scale filter. We perform a series of idealised tests to assess the accuracy, convergence and scaling of the code. Finally, we present some applications of the code to various SPH and meshless finite-mass (MFM) simulations of galaxy clusters performed with OpenGadget3, with different resolutions and physics, to showcase the capabilities of the code.
{"title":"vortex-p: A Helmholtz-Hodge and Reynolds decomposition algorithm for particle-based simulations","authors":"David Vallés-Pérez , Susana Planelles , Vicent Quilis , Frederick Groth , Tirso Marin-Gilabert , Klaus Dolag","doi":"10.1016/j.cpc.2024.109305","DOIUrl":"https://doi.org/10.1016/j.cpc.2024.109305","url":null,"abstract":"<div><p>Astrophysical turbulent flows display an intrinsically multi-scale nature, making their numerical simulation and the subsequent analyses of simulated data a complex problem. In particular, two fundamental steps in the study of turbulent velocity fields are the Helmholtz-Hodge decomposition (compressive+solenoidal; HHD) and the Reynolds decomposition (bulk+turbulent; RD). These problems are relatively simple to perform numerically for uniformly-sampled data, such as the one emerging from Eulerian, fix-grid simulations; but their computation is remarkably more complex in the case of non-uniformly sampled data, such as the one stemming from particle-based or meshless simulations. In this paper, we describe, implement and test <span>vortex-p</span>, a publicly available tool evolved from the <span>vortex</span> code, to perform both these decompositions upon the velocity fields of particle-based simulations, either from smoothed particle hydrodynamics (SPH), moving-mesh or meshless codes. The algorithm relies on the creation of an ad-hoc adaptive mesh refinement (AMR) set of grids, on which the input velocity field is represented. HHD is then addressed by means of elliptic solvers, while for the RD we adapt an iterative, multi-scale filter. We perform a series of idealised tests to assess the accuracy, convergence and scaling of the code. 
Finally, we present some applications of the code to various SPH and meshless finite-mass (MFM) simulations of galaxy clusters performed with <span>OpenGadget3</span>, with different resolutions and physics, to showcase the capabilities of the code.</p></div>","PeriodicalId":285,"journal":{"name":"Computer Physics Communications","volume":null,"pages":null},"PeriodicalIF":7.2,"publicationDate":"2024-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0010465524002285/pdfft?md5=c598207a6435a4989865b3be3b66e73f&pid=1-s2.0-S0010465524002285-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141593360","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-07-02
DOI: 10.1016/j.cpc.2024.109301
Junxiang Yang , Jian Wang , Soobin Kwak , Seokjun Ham , Junseok Kim
In this article, we propose a modified Allen–Cahn (AC) equation with a space-dependent interfacial parameter. When numerically solving the AC equation with a constant interfacial parameter over large domains, a substantial number of grid points are essential, which leads to significant computational costs. To effectively resolve this problem, numerous adaptive mesh techniques have been developed and implemented. These methods use locally refined meshes that adaptively track the interfacial positions of the phase field throughout the simulation. However, the data structures for adaptive algorithms are generally complex, and the problems to be solved may involve challenges at multiple scales. In this article, we present a modified AC equation with a mesh size-dependent interfacial parameter on a triangular mesh to efficiently solve multi-scale problems. In the proposed method, a triangular mesh is used, and the interfacial parameter value at a node point is defined as a function of the average length of the edges connected to the node point. The proposed algorithm effectively uses large and small values of the interfacial parameter on coarse and fine meshes, respectively. To demonstrate the efficiency and superior performance of the proposed method, we conduct several representative numerical experiments. The computational results indicate that the proposed interfacial function can adequately evolve the multi-scale phase interfaces without excessive relaxation or freezing of the interfaces. Finally, we provide the main source code for the methodology, including mesh generation as described in this paper, so that interested readers can use it.
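The central ingredient above — an interfacial parameter defined per node from the average length of its incident edges — can be sketched as follows. The specific functional form eps = m*h/(2*sqrt(2)*atanh(0.9)) (an interface spanning roughly m local mesh cells) is a common choice in the phase-field literature and is assumed here for illustration; the paper's exact definition may differ:

```python
import numpy as np

def node_interfacial_parameter(points, triangles, m=4):
    """Per-node interfacial parameter eps_i on a triangular mesh.

    h_i is the average length of edges incident to node i; eps_i is then
    chosen so the diffuse interface spans about m local cells (assumed
    form eps = m*h/(2*sqrt(2)*atanh(0.9)); illustrative only).
    """
    n = len(points)
    sums = np.zeros(n)
    counts = np.zeros(n)
    seen = set()
    for tri in triangles:
        for a, b in ((tri[0], tri[1]), (tri[1], tri[2]), (tri[2], tri[0])):
            e = (min(a, b), max(a, b))
            if e in seen:
                continue                       # count each edge once
            seen.add(e)
            length = np.linalg.norm(points[a] - points[b])
            sums[[a, b]] += length
            counts[[a, b]] += 1
    h = sums / counts                          # average incident edge length
    return m * h / (2.0 * np.sqrt(2.0) * np.arctanh(0.9))
```

Nodes surrounded by coarse triangles automatically receive a larger eps (wider interface) and nodes in refined regions a smaller one, which is the behaviour the method exploits.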
{"title":"A modified Allen–Cahn equation with a mesh size-dependent interfacial parameter on a triangular mesh","authors":"Junxiang Yang , Jian Wang , Soobin Kwak , Seokjun Ham , Junseok Kim","doi":"10.1016/j.cpc.2024.109301","DOIUrl":"https://doi.org/10.1016/j.cpc.2024.109301","url":null,"abstract":"<div><p>In this article, we propose a modified Allen–Cahn (AC) equation with a space-dependent interfacial parameter. When numerically solving the AC equation with a constant interfacial parameter over large domains, a substantial number of grid points are essential, which leads to significant computational costs. To effectively resolve this problem, numerous adaptive mesh techniques have been developed and implemented. These methods use locally refined meshes that adaptively track the interfacial positions of the phase field throughout the simulation. However, the data structures for adaptive algorithms are generally complex, and the problems to be solved may involve challenges at multiple scales. In this article, we present a modified AC equation with a mesh size-dependent interfacial parameter on a triangular mesh to efficiently solve multi-scale problems. In the proposed method, a triangular mesh is used, and the interfacial parameter value at a node point is defined as a function of the average length of the edges connected to the node point. The proposed algorithm effectively uses large and small values of the interfacial parameter on coarse and fine meshes, respectively. To demonstrate the efficiency and superior performance of the proposed method, we conduct several representative numerical experiments. The computational results indicate that the proposed interfacial function can adequately evolve the multi-scale phase interfaces without excessive relaxation or freezing of the interfaces. 
Finally, we provide the main source code for the methodology, including mesh generation as described in this paper, so that interested readers can use it.</p></div>","PeriodicalId":285,"journal":{"name":"Computer Physics Communications","volume":null,"pages":null},"PeriodicalIF":7.2,"publicationDate":"2024-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141593358","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-07-02
DOI: 10.1016/j.cpc.2024.109304
The web-based application NEBOAS was developed to calculate the neutron yield of the accelerator-based neutron source MONNET (MONo-energetic NEutron Tower) at JRC-Geel. Neutrons are produced via p- and d-induced reactions on particular targets. NEBOAS provides double-differential neutron yields as a function of the angle of neutron emission and energy. It also provides integrated neutron yields and neutron energy spectra. Any projectile energy may be used, as long as it is covered by the available nuclear data, for thin targets (which merely degrade the projectiles' energy) or thick targets (which stop the projectiles). The calculation employs reaction cross-section evaluations from the Evaluated Nuclear Data File (ENDF) or compilations of experimental measurements from the Experimental Nuclear Reaction Data (EXFOR) library, and stopping powers as recommended in the National Institute of Standards and Technology (NIST) or the Stopping and Range of Ions in Matter (SRIM-2013) databases.
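For a thick target that stops the beam, the integrated yield combines exactly the two ingredients named above: the reaction cross-section and the stopping power, via Y(E0) = n_atoms * ∫ σ(E)/S(E) dE. A hedged sketch with synthetic σ(E) and S(E) (NEBOAS itself uses tabulated ENDF/EXFOR and NIST/SRIM data; the function names here are illustrative):

```python
import numpy as np

def thick_target_yield(e0, sigma, stopping, n_atoms, n_steps=2000):
    """Neutrons per projectile for a target thick enough to stop the beam:

        Y(E0) = n_atoms * integral_0^E0 sigma(E) / S(E) dE

    sigma(E): reaction cross-section in cm^2, S(E) = -dE/dx in MeV/cm,
    n_atoms: target atom number density in cm^-3.  Simple trapezoidal
    quadrature; illustrative only, not the NEBOAS implementation.
    """
    e = np.linspace(0.0, e0, n_steps)
    f = sigma(e) / stopping(e)
    return n_atoms * np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(e))
```

With a constant stopping power and a step-threshold cross-section the integral is analytic, which makes a convenient sanity check.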
{"title":"NEBOAS: A Neutron yiElds Based On AcceleratorS application","authors":"","doi":"10.1016/j.cpc.2024.109304","DOIUrl":"10.1016/j.cpc.2024.109304","url":null,"abstract":"<div><p>The web-based application NEBOAS was developed to calculate the neutron yield of the accelerator-based neutron source MONNET (MONo-energetic NEutron Tower) at JRC-Geel. Neutrons are produced with <em>p</em> and <em>d</em>-induced reactions on particular targets. NEBOAS provides double differential neutron yields, as a function of the angle of neutron emission and energy. It provides also integrated neutron yields, and neutron energy spectra. Any projectile energy may be utilized, as long as it is covered by the nuclear data available on thin targets (that just degrade the projectiles' energy) or thick targets (that stop the projectiles). The calculation employs reaction cross-section evaluations from Evaluated Nuclear Data File (ENDF) or compilations of experimental measurements from the Experimental Nuclear Reaction Data (EXFOR), and stopping powers as recommended in the National Institute of Standards and Technology (NIST) or the Stopping and Range of Ions in Matter (SRIM-2013) databases.</p></div>","PeriodicalId":285,"journal":{"name":"Computer Physics Communications","volume":null,"pages":null},"PeriodicalIF":7.2,"publicationDate":"2024-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141691651","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-07-01
DOI: 10.1016/j.cpc.2024.109298
Jeffrey Lazar , Stephan Meighen-Berger , Christian Haack , David Kim , Santiago Giner , Carlos A. Argüelles
Neutrino telescopes are gigaton-scale neutrino detectors composed of individual light-detection units. Though constructed from simple building blocks, they have opened a new window to the Universe and are able to probe center-of-mass energies comparable to those of collider experiments. Prometheus is a new, open-source simulation tailored for this kind of detector. Our package, written in a combination of C++ and Python, balances ease of use and performance, and allows the user to simulate a neutrino telescope with arbitrary geometry deployed in ice or water. Prometheus simulates the neutrino interactions in the volume surrounding the detector, computes the light yield of the hadronic shower and the outgoing lepton, propagates the photons in the medium, and records their arrival times and positions in user-defined regions. Finally, Prometheus events are serialized into a parquet file, a compact and interoperable file format that allows prompt access to the events for further analysis.
Program summary
Program title: Prometheus
CPC Library link to program files: https://doi.org/10.17632/svwyd4rd83.1
Developer's repository link: https://github.com/Harvard-Neutrino/prometheus
Licensing provisions: GNU Lesser General Public License 2.1
Programming language: Python
Nature of problem: Simulation of neutrino telescopes in ice and water.
Solution method: Monte Carlo methods.
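The abstract's claim that such telescopes probe collider-like center-of-mass energies follows from simple two-body kinematics for a neutrino striking a nucleon at rest. A quick back-of-the-envelope check (standard relativistic formula; not part of the Prometheus API):

```python
import math

M_N = 0.938  # nucleon mass in GeV (approximate)

def cm_energy_gev(e_nu_gev):
    """Center-of-mass energy sqrt(s) for a neutrino of lab energy E
    hitting a nucleon at rest: s = m_N^2 + 2 m_N E (neutrino mass
    neglected)."""
    return math.sqrt(M_N**2 + 2.0 * M_N * e_nu_gev)
```

A 1 PeV (1e6 GeV) neutrino gives sqrt(s) of roughly 1.4 TeV, indeed in the range of collider experiments.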
{"title":"Prometheus: An open-source neutrino telescope simulation","authors":"Jeffrey Lazar , Stephan Meighen-Berger , Christian Haack , David Kim , Santiago Giner , Carlos A. Argüelles","doi":"10.1016/j.cpc.2024.109298","DOIUrl":"https://doi.org/10.1016/j.cpc.2024.109298","url":null,"abstract":"<div><p>Neutrino telescopes are gigaton-scale neutrino detectors comprised of individual light-detection units. Though constructed from simple building blocks, they have opened a new window to the Universe and are able to probe center-of-mass energies that are comparable to those of collider experiments. <span>Prometheus</span> is a new, open-source simulation tailored for this kind of detector. Our package, which is written in a combination of <span>C++</span> and <span>Python</span> provides a balance of ease of use and performance and allows the user to simulate a neutrino telescope with arbitrary geometry deployed in ice or water. <span>Prometheus</span> simulates the neutrino interactions in the volume surrounding the detector, computes the light yield of the hadronic shower and the out-going lepton, propagates the photons in the medium, and records their arrival times and position in user-defined regions. 
Finally, <span>Prometheus</span> events are serialized into a <span>parquet</span> file, which is a compact and interoperational file format that allows prompt access to the events for further analysis.</p></div><div><h3>Program summary</h3><p><em>Program title:</em> <span>Prometheus</span></p><p><em>CPC Library link to program files:</em> <span>https://doi.org/10.17632/svwyd4rd83.1</span><svg><path></path></svg></p><p><em>Developer's repository link:</em> <span>https://github.com/Harvard-Neutrino/prometheus</span><svg><path></path></svg></p><p><em>Licensing provisions:</em> GNU Lesser General Public License 2.1</p><p><em>Programming language:</em> <span>Python</span></p><p><em>Nature of problem:</em> Simulation of neutrino telescopes in ice and water.</p><p><em>Solution method:</em> Monte Carlo methods.</p></div>","PeriodicalId":285,"journal":{"name":"Computer Physics Communications","volume":null,"pages":null},"PeriodicalIF":7.2,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141607668","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-07-01
DOI: 10.1016/j.cpc.2024.109300
Sitong Zhang , Xingyu Gao , Haifeng Song , Bin Wen
The generalized stacking fault energy (GSFE) is a fundamental and pivotal parameter for the plastic deformation of materials. In our investigation, we conduct first-principles calculations using the full-potential linearized augmented planewave (FLAPW) method to assess the GSFE, employing both single-shift and triple-shift supercell models. The different defects in these models have different impacts on the self-consistent field (SCF) iterations and the atomic relaxation. We propose an adaptive preconditioning scheme that can identify long-wavelength divergence behavior of the Jacobian during the SCF iteration and automatically switch on Kerker preconditioning to accelerate convergence, without any prior information. We implement this algorithm in the Elk-7.2.42 package and calculate the GSFE curves for the (111) plane along the ⟨1̄1̄2⟩ direction for Al, Cu, and Si. The results indicate that defects induced by the vacuum layer in the single-shift supercell model negatively impact the convergence of the SCF iterations and the atomic relaxation; the triple-shift supercell model is therefore recommended.
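The Kerker preconditioner that the adaptive scheme switches on damps the long-wavelength (small-q) components of the density residual, suppressing the charge sloshing responsible for slow SCF convergence. A 1D periodic sketch of one preconditioned linear-mixing step (generic textbook form; the paper's adaptive switching logic is not reproduced here):

```python
import numpy as np

def kerker_mix(rho_in, rho_out, alpha=0.5, q0=1.0, box=10.0):
    """One linear-mixing SCF step with the Kerker preconditioner:

        rho_new = rho_in + alpha * F^-1[ q^2/(q^2+q0^2) * F[rho_out - rho_in] ]

    The filter q^2/(q^2+q0^2) -> 0 as q -> 0, so long-wavelength residual
    components are damped while short-wavelength ones are mixed almost
    at full strength.  1D periodic illustration only.
    """
    n = len(rho_in)
    q = 2.0 * np.pi * np.fft.fftfreq(n, d=box / n)   # angular wavenumbers
    damp = q**2 / (q**2 + q0**2)
    res = np.fft.fft(rho_out - rho_in)
    return rho_in + alpha * np.real(np.fft.ifft(damp * res))
```

Feeding the step a pure long-wavelength residual produces a much smaller update than a short-wavelength residual of the same amplitude, which is exactly the stabilising behaviour the adaptive scheme turns on when it detects long-wavelength divergence.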
{"title":"An adaptive preconditioning scheme for the self-consistent field iteration and generalized stacking fault energy calculations","authors":"Sitong Zhang , Xingyu Gao , Haifeng Song , Bin Wen","doi":"10.1016/j.cpc.2024.109300","DOIUrl":"https://doi.org/10.1016/j.cpc.2024.109300","url":null,"abstract":"<div><p>The generalized stacking fault energy (GSFE) stands as a fundamental yet pivotal parameter for the plastic deformation of materials. In our investigation, we conduct first-principles calculations using the full-potential linearized augmented planewave (FLAPW) method to assess the GSFE, employing both single-shift and triple-shift supercell models. Different defects in these models result in different impacts on the self-consistent field (SCF) iterations and atomic relaxation. We propose an adaptive preconditioning scheme that can identify the long-wavelength divergence behavior of the Jacobian during the SCF iteration and automatically switch on the Kerker preconditioning to accelerate the convergence without any prior information. We implement this algorithm based on Elk-7.2.42 package and calculate the GSFE curves for the (111) plane along <span><math><mo>〈</mo><mover><mrow><mn>1</mn></mrow><mrow><mo>¯</mo></mrow></mover><mover><mrow><mn>1</mn></mrow><mrow><mo>¯</mo></mrow></mover><mn>2</mn><mo>〉</mo></math></span> direction of Al, Cu, and Si. 
The results indicate that defects induced by the vacuum layer in the single-shift supercell model negatively impact the convergence of SCF iterations and atomic relaxation, therefore the triple-shift supercell model is more recommended.</p></div>","PeriodicalId":285,"journal":{"name":"Computer Physics Communications","volume":null,"pages":null},"PeriodicalIF":7.2,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141539383","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-07-01
DOI: 10.1016/j.cpc.2024.109294
A fundamental task in particle-in-cell (PIC) simulations of plasma physics is solving for charged particle motion in electromagnetic fields. This problem is especially challenging when the plasma is strongly magnetized due to numerical stiffness arising from the wide separation in time scales between highly oscillatory gyromotion and overall macroscopic behavior of the system. In contrast to conventional finite difference schemes, we investigated exponential integration techniques to numerically simulate strongly magnetized charged particle motion. Numerical experiments with a uniform magnetic field show that exponential integrators yield superior performance for linear problems (i.e. configurations with an electric field given by a quadratic electric scalar potential) and are competitive with conventional methods for nonlinear problems with cubic and quartic electric scalar potentials.
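For a constant magnetic field, the stiff gyromotion can be advanced exactly by the matrix exponential of the cross-product operator (a Rodrigues rotation), which is the building block exponential integrators exploit: the time step is then not limited by the gyro-period. A minimal sketch of that exact rotation step with E = 0 (illustrative; the paper studies full exponential schemes including electric fields):

```python
import numpy as np

def exp_rotation_step(v, b_vec, qm, dt):
    """Advance dv/dt = qm * (v x B) over dt exactly for constant B.

    qm is the charge-to-mass ratio q/m.  The parallel velocity component
    is unchanged; the perpendicular component is rotated by the gyration
    angle theta = -qm*|B|*dt about the field direction (closed-form
    matrix exponential of the cross-product matrix).
    """
    B = np.linalg.norm(b_vec)
    if B == 0.0:
        return v.copy()
    b = b_vec / B
    theta = -qm * B * dt                    # sign follows from v x B
    vpar = np.dot(v, b) * b                 # unchanged parallel part
    vperp = v - vpar
    return vpar + np.cos(theta) * vperp + np.sin(theta) * np.cross(b, vperp)
```

Because the step is a pure rotation plus an invariant parallel drift, the speed is conserved exactly for any dt, in contrast to explicit finite-difference schemes whose stability degrades for steps comparable to the gyro-period.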
{"title":"Exploring exponential time integration for strongly magnetized charged particle motion","authors":"","doi":"10.1016/j.cpc.2024.109294","DOIUrl":"10.1016/j.cpc.2024.109294","url":null,"abstract":"<div><p>A fundamental task in particle-in-cell (PIC) simulations of plasma physics is solving for charged particle motion in electromagnetic fields. This problem is especially challenging when the plasma is strongly magnetized due to numerical stiffness arising from the wide separation in time scales between highly oscillatory gyromotion and overall macroscopic behavior of the system. In contrast to conventional finite difference schemes, we investigated exponential integration techniques to numerically simulate strongly magnetized charged particle motion. Numerical experiments with a uniform magnetic field show that exponential integrators yield superior performance for linear problems (i.e. configurations with an electric field given by a quadratic electric scalar potential) and are competitive with conventional methods for nonlinear problems with cubic and quartic electric scalar potentials.</p></div>","PeriodicalId":285,"journal":{"name":"Computer Physics Communications","volume":null,"pages":null},"PeriodicalIF":7.2,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0010465524002170/pdfft?md5=9f3b8ea83c053b8ae7f64c41736e3c0d&pid=1-s2.0-S0010465524002170-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141630688","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-07-01
DOI: 10.1016/j.cpc.2024.109299
Sergei Iskakov , Alexander Hampel , Nils Wentzell , Emanuel Gull
We present the TRIQS/Nevanlinna analytic continuation package, an efficient implementation of the methods proposed by J. Fei et al. (2021) [53] and (2021) [55]. TRIQS/Nevanlinna strives to provide a high-quality, open-source alternative (distributed under the GNU General Public License version 3) to the more widely adopted Maximum Entropy-based analytic continuation programs. With the additional Hardy function optimization procedure, it allows for an accurate resolution of wide-band and sharp features in the spectral function. These problems can be formulated in terms of imaginary-time or Matsubara-frequency response functions. The application is based on the TRIQS C++/Python framework, which allows for easy interoperability with other TRIQS-based applications, electronic band structure codes and visualization tools. Similar to other TRIQS packages, it comes with a convenient Python interface.
Program summary
Program Title: TRIQS/Nevanlinna
CPC Library link to program files: https://doi.org/10.17632/4cbzfy5rds.1
Developer's repository link: https://github.com/TRIQS/Nevanlinna
Licensing provisions: GPLv3
Programming language: C++/Python
External routines/libraries: TRIQS 3.2, Boost >= 1.76.0, Eigen >= 3.4.0, cmake >= 3.20
Nature of problem: Finite-temperature field theories are widely used to study quantum many-body effects and electronic structure of correlated materials. Obtaining physically relevant spectral functions from results in the imaginary time/Matsubara frequency domains requires solution of an ill-posed analytic continuation problem as a post-processing step.
Solution method: We present an efficient C++/Python open-source implementation of the Nevanlinna/Caratheodory analytic continuation.
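The existence of a Nevanlinna interpolant through given Matsubara data is governed by the Pick criterion used in the papers of Fei et al.: after Cayley-transforming both the Matsubara points and the values of NG = -G into the unit disk, the Pick matrix must be positive semi-definite. A hedged NumPy sketch of just that solvability check (not the package's continued-fraction interpolation, and not its API):

```python
import numpy as np

def pick_matrix(matsubara_pts, ng_values):
    """Pick matrix for the Nevanlinna interpolation problem:

        P_jk = (1 - theta_j * conj(theta_k)) / (1 - lam_j * conj(lam_k)),

    with lam = (z - i)/(z + i) the Cayley transform of the Matsubara
    points z = i*w_n, and theta = (NG - i)/(NG + i) that of NG = -G.
    P >= 0 (positive semi-definite) iff an interpolant exists.
    Sketch of the criterion only; assumed to follow Fei et al. (2021).
    """
    z = np.asarray(matsubara_pts, dtype=complex)
    ng = np.asarray(ng_values, dtype=complex)
    lam = (z - 1j) / (z + 1j)
    th = (ng - 1j) / (ng + 1j)
    return (1.0 - np.outer(th, th.conj())) / (1.0 - np.outer(lam, lam.conj()))
```

For noise-free data generated from an actual Nevanlinna function the matrix is positive semi-definite up to round-off, which is why the method targets noise-free input.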
{"title":"TRIQS/Nevanlinna: Implementation of the Nevanlinna Analytic Continuation method for noise-free data","authors":"Sergei Iskakov , Alexander Hampel , Nils Wentzell , Emanuel Gull","doi":"10.1016/j.cpc.2024.109299","DOIUrl":"https://doi.org/10.1016/j.cpc.2024.109299","url":null,"abstract":"<div><p>We present the <span>TRIQS</span>/<span>Nevanlinna</span> analytic continuation package, an efficient implementation of the methods proposed by J. Fei et al. (2021) <span>[53]</span> and (2021) <span>[55]</span>. <span>TRIQS</span>/<span>Nevanlinna</span> strives to provide a high quality open source (distributed under the GNU General Public License version 3) alternative to the more widely adopted Maximum Entropy based analytic continuation programs. With the additional Hardy functions optimization procedure, it allows for an accurate resolution of wide band and sharp features in the spectral function. Those problems can be formulated in terms of imaginary time or Matsubara frequency response functions. The application is based on the <span>TRIQS</span> C++/Python framework, which allows for easy interoperability with other <span>TRIQS</span>-based applications, electronic band structure codes and visualization tools. 
Similar to other <span>TRIQS</span> packages, it comes with a convenient Python interface.</p></div><div><h3>Program summary</h3><p><em>Program Title:</em> <span>TRIQS</span>/<span>Nevanlinna</span></p><p><em>CPC Library link to program files:</em> <span>https://doi.org/10.17632/4cbzfy5rds.1</span><svg><path></path></svg></p><p><em>Developer's repository link:</em> <span>https://github.com/TRIQS/Nevanlinna</span><svg><path></path></svg></p><p><em>Licensing provisions:</em> GPLv3</p><p><em>Programming language:</em> <span>C++</span>/<span>Python</span></p><p><em>External routines/libraries:</em> <span>TRIQS 3.2</span> <span>[1]</span>, <span>Boost >= 1.76.0</span>, <span>Eigen >= 3.4.0</span>, <span>cmake >= 3.20</span>.</p><p><em>Nature of problem:</em> Finite-temperature field theories are widely used to study quantum many-body effects and electronic structure of correlated materials. Obtaining physically relevant spectral functions from results in the imaginary time/Matsubara frequency domains requires solution of an ill-posed analytic continuation problem as a post-processing step.</p><p><em>Solution method:</em> We present an efficient C++/Python open-source implementation of the Nevanlinna/Caratheodory analytic continuation.</p></div>","PeriodicalId":285,"journal":{"name":"Computer Physics Communications","volume":null,"pages":null},"PeriodicalIF":7.2,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141593361","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-06-28
DOI: 10.1016/j.cpc.2024.109296
Mei Zhang , Haijian Yang , Yong Liu , Rui Li
In reservoir simulation, the non-isothermal multiphase flow problem introduces a temperature variable to account for thermal effects, which poses challenges for efficiently solving the resulting nonlinear systems in large-scale simulations. In this paper, we introduce and investigate a family of Schur-complement-based field-split algorithms for non-isothermal multiphase flow problems, particularly those characterized by high heterogeneity. The algorithm decomposes a large system with multiple physical fields into smaller, more manageable sub-systems, which enables parallel computation and makes it suitable for high-performance computing environments. Furthermore, a multilevel Schur-complement preconditioner, which applies the Schur-complement technique at each level of the hierarchy to capture the coupling between different fields and physics, is proposed to enhance the efficiency and robustness of the parallel simulator. Large-scale simulations of both benchmark and realistic problems are conducted on a supercomputer, showcasing the method's efficacy in managing heat diffusion, significantly reducing linear iterations, and demonstrating good parallel scalability.
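One level of the field-split idea can be written down directly: for a 2x2 block system coupling two physical fields, the first field is eliminated through the Schur complement S = D - C A^{-1} B and the reduced system is solved for the second field. A dense NumPy sketch (the paper's preconditioner applies this recursively with approximate inverses and inexact solves; the block labels here are generic, e.g. flow vs. temperature):

```python
import numpy as np

def schur_field_split_solve(A, B, C, D, f, g):
    """Solve the block system [[A, B], [C, D]] [x; y] = [f; g] by
    eliminating the first field via the Schur complement
    S = D - C A^{-1} B, then back-substituting.
    Dense illustration of one field-split level only.
    """
    Ainv_f = np.linalg.solve(A, f)          # A^{-1} f
    Ainv_B = np.linalg.solve(A, B)          # A^{-1} B
    S = D - C @ Ainv_B                      # Schur complement
    y = np.linalg.solve(S, g - C @ Ainv_f)  # reduced solve for field 2
    x = Ainv_f - Ainv_B @ y                 # back-substitute field 1
    return x, y
```

In a practical preconditioner the inner solves with A and S are replaced by cheap approximations (e.g. AMG cycles), but the algebraic skeleton is the same.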
{"title":"Multilevel Schur-complement algorithms for scalable parallel reservoir simulation with temperature variation","authors":"Mei Zhang , Haijian Yang , Yong Liu , Rui Li","doi":"10.1016/j.cpc.2024.109296","DOIUrl":"https://doi.org/10.1016/j.cpc.2024.109296","url":null,"abstract":"<div><p>In reservoir simulation, the non-isothermal multiphase flow problem introduces the temperature variable to account for thermal effects, simultaneously posing challenges in efficiently solving the nonlinear systems for large-scale simulations. In this paper, we introduce and investigate a family of Schur-complement-based field-split algorithms designed for addressing non-isothermal multiphase flow problems, particularly those characterized by high heterogeneity. This algorithm involves decomposing a large system into smaller, more manageable sub-systems for solving non-isothermal multiphase flow problems with multiple physical fields, which enables parallel computation and makes it suitable for high-performance computing environments. Furthermore, a multilevel Schur-complement preconditioner, which involves applying the Schur-complement technique at each level of the hierarchy by capturing the coupling between different fields and physics, is proposed to enhance the efficiency and robustness of the parallel simulator. 
Large-scale simulations for both benchmark and realistic problems are conducted on a supercomputer, showcasing the method's efficacy in managing heat diffusion, significantly reducing linear iterations, and demonstrating a good parallel scalability.</p></div>","PeriodicalId":285,"journal":{"name":"Computer Physics Communications","volume":null,"pages":null},"PeriodicalIF":7.2,"publicationDate":"2024-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141539376","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-06-28
DOI: 10.1016/j.cpc.2024.109295
Hanghang Ma , Liwei Tan , Suming Weng , Wenjun Ying , Zhengming Sheng , Jie Zhang
Laser plasma instabilities (LPIs) strongly influence the laser energy deposition efficiency and are therefore important processes in inertial confinement fusion (ICF). Numerical simulations play an important role in revealing the complex physics of LPIs. Since LPIs are typically three-wave coupling processes, precise simulations of LPIs with kinetic effects must resolve the laser period (around one femtosecond) and the laser wavelength (less than one micron). In typical ICF experiments, however, LPIs develop over a spatial scale of several millimeters and a temporal scale of several nanoseconds. Precise kinetic simulations of LPIs on such scales therefore require huge computational resources and are hard to carry out with present kinetic codes such as particle-in-cell (PIC) codes. In this paper, a full wave fluid model of LPIs is constructed and numerically solved by the particle-mesh method, where the plasma is described by macro particles that can move freely across the mesh grid. Based upon this model, a two-dimensional (2D) GPU code named PM2D is developed. The PM2D code can simulate the kinetic effects of LPIs self-consistently, as normal PIC codes do. Moreover, as the physical model adopted in the PM2D code is specifically constructed for LPIs, the number of macro particles per grid required in simulations can be greatly reduced, so the overall simulation cost is considerably lower than that of PIC codes. More importantly, the numerical noise in the PM2D code is much lower, which makes it more robust than PIC codes for simulations of LPIs on long time scales above 10 picoseconds. With distributed computing realized, our PM2D code is able to run on GPU clusters with up to several billion mesh grids in total, which meets the requirements for simulations of LPIs at ICF experimental scales at reasonable cost.
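The particle-mesh coupling mentioned above — macro particles moving freely while their contribution is deposited onto grid cells — is classically done with linear (cloud-in-cell) weighting. A 1D periodic sketch of that generic deposition step (PM2D's own deposition and field solve are 2D and may differ in detail):

```python
import numpy as np

def cic_deposit(positions, weights, n_cells, box):
    """Deposit macro-particle weights onto a periodic 1D grid with
    cloud-in-cell (linear) weighting: each particle shares its weight
    between the two nearest cell centres, so particles can cross grid
    cells freely without depositing discontinuously.
    Returns the cell-centred density (weight per unit length).
    """
    dx = box / n_cells
    grid = np.zeros(n_cells)
    s = positions / dx - 0.5               # coordinate in cell-centre units
    i0 = np.floor(s).astype(int)           # left neighbour cell
    frac = s - i0                          # linear weight of right neighbour
    np.add.at(grid, i0 % n_cells, weights * (1.0 - frac))
    np.add.at(grid, (i0 + 1) % n_cells, weights * frac)
    return grid / dx
```

Linear weighting conserves the total deposited quantity exactly and varies continuously as particles cross cell boundaries, which keeps the grid source terms smooth.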
Program summary
Program Title: PM2D
CPC Library link to program files: https://doi.org/10.17632/xscj6vnkkw.1
Licensing provisions: GNU General Public License v3.0.
Programming language: C++, CUDA.
Nature of problem: Although large-scale simulations of laser plasma instabilities (LPIs) are of great significance for inertial confinement fusion (ICF), there is still no suitable code for simulating these problems. The PM2D code, based on a GPU platform, provides an effective method for simulating these large-scale problems in ICF.
Solution method: A fluid model for LPIs is established first, comprising wave equations that describe laser propagation, electron and ion fluid equations that describe the plasma motion, and a Poisson equation that describes the electrostatic field induced by charge separation. The wave equation is solved on a rectangular region using absorbing boundary conditions on all four boundaries. The absorption boun
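The solution method combines a wave equation for the laser with absorbing boundary conditions. As a minimal sketch of that ingredient (a 1D scalar wave equation with first-order Mur absorbing boundaries; illustrative only, not the PM2D scheme), assuming the standard leapfrog discretization:

```python
import numpy as np

def step_wave_mur(u_prev, u, c, dx, dt):
    """One leapfrog step of u_tt = c^2 u_xx with first-order Mur
    absorbing boundary conditions at both ends."""
    r = c * dt / dx
    u_next = np.empty_like(u)
    u_next[1:-1] = (2.0 * u[1:-1] - u_prev[1:-1]
                    + r**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2]))
    # Mur ABC: outgoing waves leave the domain with minimal reflection
    k = (r - 1.0) / (r + 1.0)
    u_next[0] = u[1] + k * (u_next[1] - u[0])
    u_next[-1] = u[-2] + k * (u_next[-2] - u[-1])
    return u_next

# launch a Gaussian pulse; both halves exit cleanly through the boundaries
nx, c, dx = 200, 1.0, 1.0
dt = dx / c                      # r = 1: scheme and ABC are exact in 1D
x = np.arange(nx) * dx
u = np.exp(-((x - 100.0) / 10.0) ** 2)
u_prev = u.copy()
for _ in range(600):
    u_prev, u = u, step_wave_mur(u_prev, u, c, dx, dt)
```

At the Courant limit r = 1 the 1D scheme reproduces the d'Alembert solution on the grid, so after the pulse has propagated past both boundaries the field is essentially zero; 2D full-wave solvers like the one described above require more elaborate absorbing layers on all four boundaries.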
Title: PM2D: A parallel GPU-based code for the kinetic simulation of laser plasma instabilities at large scales
Pub Date: 2024-06-28 | DOI: 10.1016/j.cpc.2024.109297
Brandon L. Butler , Domagoj Fijan , Sharon C. Glotzer
Particle tracking is commonly used to study time-dependent behavior in many different types of physical and chemical systems involving constituents that span many length scales, including atoms, molecules, nanoparticles, granular particles, and even larger objects. Behaviors of interest studied using particle tracking information include disorder-order transitions, thermodynamic phase transitions, structural transitions, protein folding, crystallization, gelation, swarming, avalanches and fracture. A common challenge in studies of these systems involves change detection. Change point detection discerns when a temporal signal undergoes a change in distribution. These changes can be local or global, instantaneous or prolonged, obvious or subtle. Moreover, system-wide changes marking an interesting physical or chemical phenomenon (e.g. crystallization of a liquid) are often preceded by events (e.g. pre-nucleation clusters) that are localized and can occur anywhere at any time in the system. For these reasons, detecting events in particle trajectories generated by molecular simulation is challenging and typically accomplished via ad hoc solutions unique to the behavior and system under study. Consequently, methods for event detection lack generality, and those used in one field are not easily used by scientists in other fields. Here we present a new Python-based tool, dupin, that allows for universal event detection from particle trajectory data irrespective of the system details. dupin works by creating a signal representing the simulation and partitioning the signal based on events (changes within the trajectory). This approach allows for studies where manual annotation of event boundaries would require a prohibitive amount of time. Furthermore, dupin can serve as a tool in automated and reproducible workflows. We demonstrate the application of dupin using three examples and discuss its applicability to a wider class of problems.
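The core idea of partitioning a signal at distribution changes can be illustrated with a minimal least-squares split on a mean-shifted signal. This is a generic change point sketch, not dupin's API; the function name and signal are hypothetical:

```python
import numpy as np

def best_split(x):
    """Return (index, gain) of the single split point that most reduces
    the total within-segment squared error of a 1D signal."""
    n = len(x)
    total = ((x - x.mean()) ** 2).sum()
    best_i, best_gain = None, 0.0
    for i in range(2, n - 1):
        left, right = x[:i], x[i:]
        cost = (((left - left.mean()) ** 2).sum()
                + ((right - right.mean()) ** 2).sum())
        gain = total - cost           # error reduction from splitting at i
        if gain > best_gain:
            best_i, best_gain = i, gain
    return best_i, best_gain

# synthetic "simulation signal": the mean shifts at frame 100
# (e.g. an order parameter jumping at the onset of crystallization)
rng = np.random.default_rng(1)
sig = np.concatenate([rng.normal(0.0, 0.1, 100), rng.normal(1.0, 0.1, 100)])
cp, gain = best_split(sig)
```

Applying such a split recursively (binary segmentation) yields multiple change points; dupin builds on this kind of cost-based partitioning but constructs the signal automatically from trajectory features.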
Program summary
Program Title: dupin
CPC Library link to program files: https://doi.org/10.17632/kjcn97zc46.1
Developer's repository link: https://github.com/glotzerlab/dupin
Licensing provisions: BSD 3-clause
Programming language: Python
Nature of problem: In the field of molecular simulations, detecting structural transitions or events within trajectories can be both challenging and time-consuming for larger studies due to the requirement of a manual approach. This issue is particularly pronounced in studies involving hundreds or thousands of simulations, where manual detection and analysis of transitions become infeasible. Our goal is to develop an automated, accurate and efficient method for detecting transition points in simulation trajectories, which both
Title: Change point detection of events in molecular simulations using dupin