Abstract In a recent paper, the author examined a correlation affinity measure for selecting the coarse degrees of freedom (CDOFs) or coarse nodes (C nodes) in systems of elliptic partial differential equations (PDEs). This measure was applied to a set of relaxed vectors, which exposed the near‐nullspace components of the PDE operator. By selecting the CDOFs using this affinity measure and constructing the interpolation operators using a least‐squares procedure, an algebraic multigrid (AMG) method was developed. However, there are two noted issues with this AMG solver. First, to capture strong anisotropies, a large number of test vectors may be needed; and second, the solver's performance can be sensitive to the initial set of random test vectors. Both issues reflect the sensitive statistical nature of the measure. In this article, we derive several other statistical measures that ameliorate these issues and lead to better AMG performance. These measures are related to a Markov process, which the PDE itself may model. Specifically, the measures are based on the diffusion distance/effective resistance for such a process, and hence, these measures incorporate physics into the CDOF selection. Moreover, because the diffusion distance/effective resistance can be used to analyze graph networks, these measures also provide a very economical scheme for analyzing large‐scale networks. In this article, the derivations of these measures are given, and numerical experiments for analyzing networks and for AMG performance on weighted‐graph Laplacians and systems of elliptic boundary‐value problems are presented.
"Bringing physics into the coarse‐grid selection: Approximate diffusion distance/effective resistance measures for network analysis and algebraic multigrid for graph Laplacians and systems of elliptic partial differential equations" by Barry Lee. Numerical Linear Algebra with Applications, published 2023-11-02. DOI: 10.1002/nla.2539.
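The effective resistance mentioned in the abstract above has a standard exact formula in terms of the pseudoinverse of the graph Laplacian; the paper's contribution is approximate measures, but the exact definition can be sketched in a few lines (this is the textbook definition, not the paper's approximation):

```python
import numpy as np

def effective_resistance(L, i, j):
    """Effective resistance between nodes i and j of a graph with
    (weighted) Laplacian L, via the Moore-Penrose pseudoinverse."""
    Lp = np.linalg.pinv(L)
    e = np.zeros(L.shape[0])
    e[i], e[j] = 1.0, -1.0
    return float(e @ Lp @ e)

# Path graph 0-1-2 with unit edge weights.
L = np.array([[ 1., -1.,  0.],
              [-1.,  2., -1.],
              [ 0., -1.,  1.]])
print(effective_resistance(L, 0, 1))  # 1.0 (one unit edge)
print(effective_resistance(L, 0, 2))  # 2.0 (two unit edges in series)
```

For the path graph the resistance equals the number of unit edges between the nodes, matching the series-resistor intuition that motivates using this quantity as a physics-aware distance.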
Abstract When solving shifted linear systems using shifted Krylov subspace methods, selecting a seed system is necessary, and an unsuitable seed may result in many shifted systems being unsolved. To avoid this problem, a seed‐switching technique has been proposed to help switch the seed system to another linear system as a new seed system without losing the dimension of the constructed Krylov subspace. Nevertheless, this technique requires collinear residual vectors when applying Krylov subspace methods to the seed and shifted systems. Since the product‐type shifted Krylov subspace methods cannot provide such collinearity, these methods cannot use this technique. In this article, we propose a variant of the shifted BiCGstab method, which possesses the collinearity of residuals, and apply the seed‐switching technique to it. Some numerical experiments show that the problem of choosing the initial seed system is circumvented.
"Shifted LOPBiCG: A locally orthogonal product‐type method for solving nonsymmetric shifted linear systems based on Bi‐CGSTAB" by Ren‐Jie Zhao, Tomohiro Sogabe, Tomoya Kemmochi, Shao‐Liang Zhang. Numerical Linear Algebra with Applications, published 2023-10-30. DOI: 10.1002/nla.2538.
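The reason shifted Krylov methods can share one seed subspace across all shifts is the shift invariance of Krylov subspaces: K_k(A, b) = K_k(A + σI, b). A minimal numerical check of this property (not the paper's LOPBiCG algorithm, just the invariance it relies on):

```python
import numpy as np

def krylov_basis(A, b, k):
    """Orthonormal basis of K_k(A, b) via Arnoldi with modified Gram-Schmidt."""
    n = len(b)
    Q = np.zeros((n, k))
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(1, k):
        w = A @ Q[:, j - 1]
        for i in range(j):
            w -= (Q[:, i] @ w) * Q[:, i]
        Q[:, j] = w / np.linalg.norm(w)
    return Q

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 8))
b = rng.standard_normal(8)
Q1 = krylov_basis(A, b, 4)
Q2 = krylov_basis(A + 2.5 * np.eye(8), b, 4)   # shifted operator, same b
# The two 4-dimensional subspaces coincide: projecting Q2 onto Q1 loses nothing.
print(np.allclose(Q1 @ (Q1.T @ Q2), Q2))  # True
```

This is why a single seed system can serve all shifts, and why losing a good seed (the problem the seed-switching technique addresses) is costly.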
Abstract When dealing with the discretization of differential equations on non‐rectangular domains, a careful treatment of the boundary is mandatory and may result in implementation difficulties and in coefficient matrices without a prescribed structure. Here we examine the numerical solution of a two‐dimensional constant coefficient distributed‐order space‐fractional diffusion equation with a nonlinear term on a convex domain. To avoid the aforementioned inconvenience, we resort to the volume‐penalization method, which consists of embedding the domain in a rectangle and adding a reaction penalization term to the original equation that dominates in the region outside the original domain and annihilates the solution there. Thanks to the volume‐penalization, methods designed for problems in rectangular domains become available for those in convex domains, and by applying an implicit finite difference scheme we obtain coefficient matrices with a 2‐level Toeplitz structure plus a diagonal matrix arising from the penalty term. As a consequence of the latter, we can describe the asymptotic eigenvalue distribution as the matrix size diverges, as well as estimate the intrinsic asymptotic ill‐conditioning of the involved matrices. On these bases, we discuss the performance of the conjugate gradient with circulant and ‐preconditioners and of the generalized minimal residual with split circulant and ‐preconditioners and conduct related numerical experiments.
"Algebra preconditionings for 2D Riesz distributed‐order space‐fractional diffusion equations on convex domains" by Mariarosa Mazza, Stefano Serra‐Capizzano, Rosita Luisa Sormani. Numerical Linear Algebra with Applications, published 2023-10-23. DOI: 10.1002/nla.2536.
Summary Quasi‐Newton iterations are constructed for the finite element solution of small‐strain nonlinear elasticity systems in 3D. The linearizations are based on spectral equivalence and hence considered as variable preconditioners arising from proper simplifications in the differential operator. Convergence is proved, providing bounds uniformly w.r.t. the FEM discretization. Convenient iterative solvers for linearized systems are also proposed. Numerical experiments in 3D confirm that the suggested quasi‐Newton methods are competitive with Newton's method.
"Quasi‐Newton variable preconditioning for nonlinear elasticity systems in 3D" by J. Karátson, S. Sysala, M. Béreš. Numerical Linear Algebra with Applications, published 2023-10-23. DOI: 10.1002/nla.2537.
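The core idea of the summary above, replacing the true Jacobian by a fixed, spectrally equivalent SPD operator, can be sketched on a toy nonlinear system (a cubic perturbation of a 1D Laplacian, chosen for illustration; this is not the paper's elasticity problem or its specific preconditioner):

```python
import numpy as np

# Quasi-Newton iteration with a frozen SPD operator B in place of the true
# Jacobian J(u) = A + 3*diag(u**2): solve A u + u**3 = b.
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # 1D Laplacian stencil
b = np.ones(n)
B = A + 3 * np.eye(n)       # fixed SPD substitute, spectrally equivalent to J
u = np.zeros(n)
for _ in range(200):
    u -= np.linalg.solve(B, A @ u + u**3 - b)    # quasi-Newton step
print(np.linalg.norm(A @ u + u**3 - b) < 1e-6)   # residual driven to ~0
```

Because B never changes, it can be factorized (or preconditioned) once and reused across all iterations, which is the practical payoff of spectral-equivalence-based linearizations.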
Abstract The need to know a few singular triplets associated with the largest singular values of a third‐order tensor arises in data compression and extraction. This paper describes a new method for their computation using the t‐product. Methods for determining a couple of singular triplets associated with the smallest singular values also are presented. The proposed methods generalize available restarted Lanczos bidiagonalization methods for computing a few of the largest or smallest singular triplets of a matrix. The methods of this paper use Ritz and harmonic Ritz lateral slices to determine accurate approximations of the largest and smallest singular triplets, respectively. Computed examples show applications to data compression and face recognition.
"A tensor bidiagonalization method for higher‐order singular value decomposition with applications" by A. El Hachimi, K. Jbilou, A. Ratnani, L. Reichel. Numerical Linear Algebra with Applications, published 2023-10-01. DOI: 10.1002/nla.2530.
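The t-product underlying the method above multiplies third-order tensors by taking an FFT along the third mode, doing ordinary matrix products frontal-slice-by-slice in the Fourier domain, and transforming back. A minimal sketch of that product (the standard t-product definition, not the paper's bidiagonalization algorithm):

```python
import numpy as np

def t_product(A, B):
    """t-product of third-order tensors A (m x p x n) and B (p x q x n):
    slice-wise matrix products in the Fourier domain along mode 3."""
    Ah = np.fft.fft(A, axis=2)
    Bh = np.fft.fft(B, axis=2)
    Ch = np.einsum('ijk,jlk->ilk', Ah, Bh)    # per-frequency matrix product
    return np.real(np.fft.ifft(Ch, axis=2))

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 4, 5))
B = rng.standard_normal((4, 2, 5))
C = t_product(A, B)
print(C.shape)  # (3, 2, 5)
```

With a single frontal slice (n = 1) the t-product reduces to the ordinary matrix product, which is a convenient sanity check.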
Abstract In this article, we propose a novel alternating minimization scheme for finding completely positive factorizations. In each iteration, our method splits the original factorization problem into two optimization subproblems, the first one being an orthogonal Procrustes problem, taken over the orthogonal group, and the second one over the set of entrywise positive matrices. We present both a convergence analysis of the method and favorable numerical results.
"Computing the completely positive factorization via alternating minimization" by R. Behling, H. Lara, H. Oviedo. Numerical Linear Algebra with Applications, published 2023-09-28. DOI: 10.1002/nla.2535.
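The two subproblems named in the abstract can be sketched concretely: any root L with L Lᵀ = A can be rotated by an orthogonal Q, and one alternates a projection onto nonnegative matrices with an orthogonal Procrustes step (solved by an SVD). This is a simplified sketch in the spirit of the scheme, not the authors' exact algorithm, and it carries no convergence guarantee:

```python
import numpy as np

rng = np.random.default_rng(2)
B0 = rng.random((6, 8))                        # nonnegative ground-truth factor
A = B0 @ B0.T                                  # a completely positive matrix
L = np.hstack([np.linalg.cholesky(A), np.zeros((6, 2))])  # a 6x8 root of A
Q = np.eye(8)
for _ in range(500):
    P = np.maximum(L @ Q, 0.0)                 # project onto nonnegative matrices
    U, _, Vt = np.linalg.svd(L.T @ P)          # Procrustes: min_Q ||L Q - P||_F
    Q = U @ Vt
B = L @ Q
print(np.allclose(B @ B.T, A))   # True: any orthogonal Q preserves the product
neg = float(np.abs(np.minimum(B, 0.0)).max())
print(neg)                       # remaining negative part; typically small here
```

The Procrustes step has the closed-form solution Q = U Vᵀ from the SVD of LᵀP, which is what makes the alternating scheme cheap per iteration.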
Shaerdan Shataer, Amos S. Lawless, Nancy K. Nichols
Abstract In variational assimilation, the most probable state of a dynamical system under Gaussian assumptions for the prior and likelihood can be found by solving a least‐squares minimization problem. In recent years, we have seen the popularity of hybrid variational data assimilation methods for Numerical Weather Prediction. In these methods, the prior error covariance matrix is a weighted sum of a climatological part and a flow‐dependent ensemble part, the latter being rank deficient. The nonlinear least squares problem of variational data assimilation is solved using iterative numerical methods, and the condition number of the Hessian is a good proxy for the convergence behavior of such methods. In this article, we study the conditioning of the least squares problem in a hybrid four‐dimensional variational data assimilation (Hybrid 4D‐Var) scheme by establishing bounds on the condition number of the Hessian. In particular, we consider the effect of the ensemble component of the prior covariance on the conditioning of the system. Numerical experiments show that the bounds obtained can be useful in predicting the behavior of the true condition number and the convergence speed of an iterative algorithm.
"Conditioning of hybrid variational data assimilation" by Shaerdan Shataer, Amos S. Lawless, Nancy K. Nichols. Numerical Linear Algebra with Applications, published 2023-09-26. DOI: 10.1002/nla.2534.
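The object under study above, the Hessian of the variational cost function with a hybrid prior covariance, can be written down in a few lines for a toy linear observation operator. The matrices below are random stand-ins chosen for illustration (not a real NWP system, and not the paper's bounds):

```python
import numpy as np

# Hessian of a variational cost function, S = B^{-1} + H^T R^{-1} H, with a
# hybrid prior covariance B = (1 - beta_e) * B_c + beta_e * B_e, where the
# ensemble part B_e is rank deficient (rank r << n).
rng = np.random.default_rng(3)
n, m, r = 20, 10, 3
Bc = np.eye(n)                          # climatological part (SPD)
X = rng.standard_normal((n, r))
Be = X @ X.T / r                        # rank-r flow-dependent ensemble part
H = rng.standard_normal((m, n))         # linearized observation operator
Rinv = np.eye(m)                        # observation-error precision
for beta_e in (0.0, 0.5, 0.9):
    B = (1 - beta_e) * Bc + beta_e * Be
    S = np.linalg.inv(B) + H.T @ Rinv @ H
    print(beta_e, round(np.log10(np.linalg.cond(S)), 1))
```

As beta_e grows, B is dominated by the rank-deficient ensemble part and becomes nearly singular, which is exactly the regime where conditioning bounds on S become informative.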
Abstract Numerous attempts have been made to develop efficient methods for solving the system of constrained nonlinear equations due to its widespread use in diverse engineering applications. In this article, we present a family of inertial‐based derivative‐free projection methods with a correction step for solving such systems, in which the selection of the derivative‐free search direction is flexible. This family does not require the computation of the corresponding Jacobian matrix or an approximate matrix at every iteration and possesses the following theoretical properties: (i) the inertial‐based corrected direction framework always automatically satisfies the sufficient descent and trust region properties without specific search directions, and is independent of any line search; (ii) the global convergence of the proposed family is proven under a weaker monotonicity condition on the mapping, without the typical monotonicity or pseudo‐monotonicity assumption; (iii) the results about the convergence rate of the proposed family are established under slightly stronger assumptions. Furthermore, we propose two effective inertial‐based derivative‐free projection methods, each embedding a specific search direction into the proposed family. We present preliminary numerical experiments on certain test problems to demonstrate the effectiveness and superiority of the proposed methods in comparison with existing ones. Additionally, we utilize these methods for solving sparse signal restoration and image restoration problems in compressive sensing applications.
"A family of inertial‐based derivative‐free projection methods with a correction step for constrained nonlinear equations and their applications" by Pengjie Liu, Hu Shao, Zihang Yuan, Jianhao Zhou. Numerical Linear Algebra with Applications, published 2023-09-22. DOI: 10.1002/nla.2533.
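The projection-method framework that the family above builds on is the classical hyperplane-projection scheme for monotone equations: take a derivative-free direction, backtrack until a descent-type condition holds, then project the iterate onto the separating hyperplane. A bare-bones sketch of that basic scheme (omitting the paper's inertial and correction steps), on a hypothetical monotone test mapping F(x) = x + sin(x):

```python
import numpy as np

def F(x):
    """A monotone mapping with root x = 0 (componentwise x + sin x)."""
    return x + np.sin(x)

def solve(x, sigma=1e-2, beta=0.5, tol=1e-8, maxit=500):
    for _ in range(maxit):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            return x
        d = -Fx                                    # derivative-free direction
        t = 1.0
        while -F(x + t * d) @ d < sigma * t * (d @ d):
            t *= beta                              # backtracking line search
        z = x + t * d
        Fz = F(z)
        x = x - ((Fz @ (x - z)) / (Fz @ Fz)) * Fz  # hyperplane projection
    return x

x = solve(np.array([2.0, -1.5, 0.7]))
print(np.linalg.norm(F(x)) < 1e-8)  # True
```

No Jacobian (or approximation of it) is ever formed, which is the defining feature of the derivative-free projection family.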
Abstract Can one recover a matrix efficiently from only matrix‐vector products? If so, how many are needed? This article describes algorithms to recover matrices with known structures, such as tridiagonal, Toeplitz, Toeplitz‐like, and hierarchical low‐rank, from matrix‐vector products. In particular, we derive a randomized algorithm for recovering an unknown hierarchical low‐rank matrix, with high probability, from a number of matrix‐vector products that depends on the rank of the off‐diagonal blocks and a small oversampling parameter. We do this by carefully constructing randomized input vectors for our matrix‐vector products that exploit the hierarchical structure of the matrix. While existing algorithms for hierarchical matrix recovery use a recursive "peeling" procedure based on elimination, our approach uses a recursive projection procedure.
"Structured matrix recovery from matrix‐vector products" by Diana Halikias, Alex Townsend. Numerical Linear Algebra with Applications, published 2023-09-22. DOI: 10.1002/nla.2531.
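The simplest instance of the structured-probing idea above is tridiagonal recovery: columns whose indices differ by three have disjoint row supports, so probing with sums of every third unit vector separates them, and three matrix-vector products recover the whole matrix. A sketch of that classical special case (not the paper's hierarchical algorithm):

```python
import numpy as np

def recover_tridiagonal(matvec, n):
    """Recover an n x n tridiagonal matrix from 3 matrix-vector products."""
    T = np.zeros((n, n))
    for c in range(3):
        probe = np.zeros(n)
        probe[c::3] = 1.0                # sum of unit vectors e_c, e_{c+3}, ...
        y = matvec(probe)                # one matrix-vector product
        for j in range(c, n, 3):         # un-mix the disjoint column supports
            lo, hi = max(j - 1, 0), min(j + 2, n)
            T[lo:hi, j] = y[lo:hi]
    return T

rng = np.random.default_rng(4)
n = 10
A = (np.diag(rng.standard_normal(n))
     + np.diag(rng.standard_normal(n - 1), 1)
     + np.diag(rng.standard_normal(n - 1), -1))
print(np.allclose(recover_tridiagonal(lambda v: A @ v, n), A))  # True
```

For hierarchical low-rank matrices the supports no longer separate this cleanly, which is why the paper turns to randomized probes and a recursive projection procedure.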
Abstract Fully implicit Runge–Kutta methods offer the possibility to use high order accurate time discretization to match space discretization accuracy, an issue of significant importance for many large scale problems of current interest, where we may have fine space resolution with many millions of spatial degrees of freedom and long time intervals. In this work, we consider strongly A‐stable implicit Runge–Kutta methods of arbitrary order of accuracy, based on Radau quadratures. For the arising large algebraic systems we introduce efficient preconditioners that (1) use only real arithmetic, (2) demonstrate robustness with respect to problem and discretization parameters, and (3) allow for fully stage‐parallel solution. The preconditioners are based on the observation that the lower‐triangular part of the coefficient matrices in the Butcher tableau has values larger in magnitude than those of the corresponding strictly upper‐triangular part. We analyze the spectrum of the corresponding preconditioned systems and illustrate their performance with numerical experiments. Even though the observation was made some time ago, it has not previously been exploited for constructing stage‐parallel preconditioners, and its systematic study constitutes the novelty of this article.
"Stage‐parallel preconditioners for implicit Runge–Kutta methods of arbitrarily high order, linear problems" by Owe Axelsson, Ivo Dravins, Maya Neytcheva. Numerical Linear Algebra with Applications, published 2023-09-19. DOI: 10.1002/nla.2532.
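The observation about the Butcher tableau can be made concrete for the 2-stage Radau IIA method: replacing the coefficient matrix by its lower-triangular part in the stage-coupled system yields a preconditioner whose preconditioned spectrum stays in a small interval around 1. This is an illustrative sketch of the idea on a 1D Laplacian, not the authors' full stage-parallel algorithm:

```python
import numpy as np

# 2-stage Radau IIA Butcher matrix; note the lower-triangular entries
# (5/12, 3/4, 1/4) dominate the single strictly upper entry (-1/12).
A_rk = np.array([[5/12, -1/12],
                 [3/4,   1/4]])
A_L = np.tril(A_rk)                      # lower-triangular approximation
n, h = 30, 0.1
M = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) * (n + 1) ** 2
S = np.eye(2 * n) + h * np.kron(A_rk, M)   # stage-coupled linear system
P = np.eye(2 * n) + h * np.kron(A_L, M)    # block-triangular preconditioner
ev = np.linalg.eigvals(np.linalg.solve(P, S))
print(ev.real.min() > 0.5)   # True: spectrum bounded away from 0
print(abs(ev - 1).max() < 0.7)  # True: eigenvalues clustered near 1
```

Diagonalizing M decouples the problem into 2x2 systems per eigenvalue of M, so the clustering can be checked per stage block, which is also what enables the stage-parallel solves.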