Global Minimization of Polynomial Integral Functionals
Giovanni Fantuzzi, Federico Fuentes. DOI: 10.1137/23m1592584
SIAM Journal on Scientific Computing, Volume 46, Issue 4, Page A2123-A2149, August 2024.
Abstract. We describe a “discretize-then-relax” strategy to globally minimize integral functionals over functions [math] in a Sobolev space subject to Dirichlet boundary conditions. The strategy applies whenever the integral functional depends polynomially on [math] and its derivatives, even if it is nonconvex. The “discretize” step uses a bounded finite element scheme to approximate the integral minimization problem with a convergent hierarchy of polynomial optimization problems over a compact feasible set, indexed by the decreasing size [math] of the finite element mesh. The “relax” step employs sparse moment-sum-of-squares relaxations to approximate each polynomial optimization problem with a hierarchy of convex semidefinite programs, indexed by an increasing relaxation order [math]. We prove that, as [math] and [math], solutions of such semidefinite programs provide approximate minimizers that converge in a suitable sense (including in certain [math] norms) to the global minimizer of the original integral functional if it is unique. We also report computational experiments showing that our numerical strategy works well even when technical conditions required by our theoretical analysis are not satisfied.
On the Training and Generalization of Deep Operator Networks
Sanghyun Lee, Yeonjong Shin. DOI: 10.1137/23m1598751
SIAM Journal on Scientific Computing, Volume 46, Issue 4, Page C273-C296, August 2024.
Abstract. We present a novel training method for deep operator networks (DeepONets), one of the most popular neural network models for operators. DeepONets are composed of two subnetworks, namely the branch and trunk networks. Typically, the two subnetworks are trained simultaneously, which amounts to solving a complex optimization problem in a high-dimensional space; the nonconvex and nonlinear nature of the problem makes training very challenging. To tackle this challenge, we propose a two-step training method that trains the trunk network first and then sequentially trains the branch network. The core mechanism, motivated by the divide-and-conquer paradigm, is the decomposition of the entire complex training task into two subtasks of reduced complexity. A Gram–Schmidt orthonormalization process is introduced in the trunk training step, which significantly improves stability and generalization ability. On the theoretical side, we establish a generalization error estimate in terms of the number of training data, the width of DeepONets, and the number of input and output sensors. Numerical examples, including Darcy flow in heterogeneous porous media, are presented to demonstrate the effectiveness of the two-step training method.
Efficient and Parallel Solution of High-Order Continuous Time Galerkin for Dissipative and Wave Propagation Problems
Zhiming Chen, Yong Liu. DOI: 10.1137/23m1572787
SIAM Journal on Scientific Computing, Volume 46, Issue 3, Page A2073-A2100, June 2024.
Abstract. We propose efficient and parallel algorithms for the implementation of the high-order continuous time Galerkin method for dissipative and wave propagation problems. By using Legendre polynomials as shape functions, we obtain a special structure of the stiffness matrix that allows us to extend the diagonal Padé approximation to solve ordinary differential equations with source terms. The unconditional stability, [math] error estimates, and [math] superconvergence at the nodes of the continuous time Galerkin method are proved. Numerical examples confirm our theoretical results.
Randomized Kaczmarz in Adversarial Distributed Setting
Longxiu Huang, Xia Li, Deanna Needell. DOI: 10.1137/23m1554357
SIAM Journal on Scientific Computing, Volume 46, Issue 3, Page B354-B376, June 2024.
Abstract. Developing large-scale distributed methods that are robust to the presence of adversarial or corrupted workers is an important part of making such methods practical for real-world problems. In this paper, we propose an iterative approach that is adversary-tolerant for convex optimization problems. By leveraging simple statistics, our method ensures convergence and is capable of adapting to adversarial distributions. Through simulations, we demonstrate the efficiency of our approach in the presence of adversaries, its ability to identify adversarial workers with high accuracy, and its tolerance of varying levels of adversary rates.
EnKSGD: A Class of Preconditioned Black Box Optimization and Inversion Algorithms
Brian Irwin, Sebastian Reich. DOI: 10.1137/23m1561142
SIAM Journal on Scientific Computing, Volume 46, Issue 3, Page A2101-A2122, June 2024.
Abstract. In this paper, we introduce the ensemble Kalman–Stein gradient descent (EnKSGD) class of algorithms. The EnKSGD class of algorithms builds on the ensemble Kalman filter (EnKF) line of work, applying techniques from sequential data assimilation to unconstrained optimization and parameter estimation problems. An essential idea is to exploit the EnKF as a black box (i.e., derivative-free, zeroth order) optimization tool if iterated to convergence. In this paper, we return to the foundations of the EnKF as a sequential data assimilation technique, including its continuous-time and mean-field limits, with the goal of developing faster optimization algorithms suited to noisy black box optimization and inverse problems. The resulting EnKSGD class of algorithms can be designed to both maintain the desirable property of affine invariance and employ the well-known backtracking line search. Furthermore, EnKSGD algorithms are designed to not necessitate the subspace restriction property and to avoid the variance collapse property of previous iterated EnKF approaches to optimization, as both these properties can be undesirable in an optimization context. EnKSGD also generalizes beyond the [math] loss and is thus applicable to a wider class of problems than the standard EnKF. Numerical experiments with empirical risk minimization type problems, including both linear and nonlinear least squares problems, as well as maximum likelihood estimation, demonstrate the faster empirical convergence of EnKSGD relative to alternative EnKF approaches to optimization. Reproducibility of computational results. This paper has been awarded the “SIAM Reproducibility Badge: Code and Data Available” as a recognition that the authors have followed reproducibility principles valued by SISC and the scientific computing community. Code and data that allow readers to reproduce the results in this paper are available at https://github.com/0x4249/EnKSGD and in the supplementary material (M156114_Supplementary_Materials.zip [106KB]).
Spectral Analysis of Implicit [math]-Stage Block Runge–Kutta Preconditioners
Martin J. Gander, Michal Outrata. DOI: 10.1137/23m1604266
SIAM Journal on Scientific Computing, Volume 46, Issue 3, Page A2047-A2072, June 2024.
Abstract. We analyze the recently introduced family of preconditioners in [M. M. Rana et al., SIAM J. Sci. Comput., 43 (2021), pp. S475–S495] for the stage equations of [math]-stage implicit Runge–Kutta methods. We simplify the formulas for the eigenvalues and eigenvectors of the preconditioned systems for a general [math]-stage method and use these to obtain convergence rate estimates for preconditioned GMRES for some common choices of implicit Runge–Kutta methods. The analysis is based on understanding the inherent matrix structure of these problems and exploiting it to qualitatively predict and explain the main observed features of the GMRES convergence behavior, using tools from approximation and potential theory based on Schwarz–Christoffel maps for curves and closed, connected domains in the complex plane. We illustrate the analysis with numerical experiments showing very close correspondence between the estimates and the observed behavior, suggesting that the analysis reliably captures the essence of these preconditioners.
A Riemannian Dimension-Reduced Second-Order Method with Application in Sensor Network Localization
Tianyun Tang, Kim-Chuan Toh, Nachuan Xiao, Yinyu Ye. DOI: 10.1137/23m1567229
SIAM Journal on Scientific Computing, Volume 46, Issue 3, Page A2025-A2046, June 2024.
Abstract. In this paper, we propose a cubic-regularized Riemannian optimization method (RDRSOM) that partially exploits second-order information and achieves an iteration complexity of [math]. To reduce the per-iteration computational cost, we further propose a practical version of RDRSOM that extends the well-known Barzilai–Borwein method and enjoys a worst-case iteration complexity of [math]. Moreover, under more stringent conditions, RDRSOM achieves an iteration complexity of [math]. We apply our method to solve a nonlinear formulation of the wireless sensor network localization problem whose feasible set is a Riemannian manifold that has not been considered in the literature before. Numerical experiments verify the high efficiency of our algorithm compared to state-of-the-art Riemannian optimization methods and other nonlinear solvers.
A New Locally Divergence-Free Path-Conservative Central-Upwind Scheme for Ideal and Shallow Water Magnetohydrodynamics
Alina Chertock, Alexander Kurganov, Michael Redle, Kailiang Wu. DOI: 10.1137/22m1539009
SIAM Journal on Scientific Computing, Volume 46, Issue 3, Page A1998-A2024, June 2024.
Abstract. We develop a new second-order unstaggered semidiscrete path-conservative central-upwind (PCCU) scheme for ideal and shallow water magnetohydrodynamics (MHD) equations. The new scheme possesses several important properties: it locally preserves the divergence-free constraint, it does not rely on any (approximate) Riemann problem solver, and it robustly produces high-resolution and nonoscillatory results. The derivation of the scheme is based on the Godunov–Powell nonconservative modifications of the studied MHD systems. The local divergence-free property is enforced by augmenting the modified systems with the evolution equations for the corresponding derivatives of the magnetic field components. These derivatives are then used to design a special piecewise linear reconstruction of the magnetic field, which guarantees a nonoscillatory nature of the resulting scheme. In addition, the proposed PCCU discretization accounts for the jump of the nonconservative product terms across cell interfaces, thereby ensuring stability. We test the proposed PCCU scheme on several benchmarks for both ideal and shallow water MHD systems. The obtained numerical results illustrate the performance of the new scheme, its robustness, and its ability not only to achieve high resolution, but also to preserve the positivity of computed quantities such as density, pressure, and water depth.
The Numerical Flow Iteration for the Vlasov–Poisson Equation
Matthias Kirchhart, R. Paul Wilhelm. DOI: 10.1137/23m154710x
SIAM Journal on Scientific Computing, Volume 46, Issue 3, Page A1972-A1997, June 2024.
Abstract. We present the numerical flow iteration (NuFI) for solving the Vlasov–Poisson equation. In a certain sense made precise herein, NuFI provides infinite resolution of the distribution function. NuFI exactly preserves positivity, all [math]-norms, charge, and entropy, and numerical experiments show no energy drift. Furthermore, NuFI requires several orders of magnitude less memory than conventional approaches and can be parallelized very efficiently on GPU clusters. Low-fidelity simulations provide good qualitative results for extended periods of time and can be computed on low-cost workstations.