Accurate bidiagonal decomposition of Lagrange–Vandermonde matrices and applications
A. Marco, José-Javier Martínez, Raquel Viaña
Numerical Linear Algebra with Applications, published 12 August 2023. DOI: 10.1002/nla.2527

Lagrange–Vandermonde matrices are the collocation matrices of Lagrange-type bases, obtained by removing the denominators from each element of a Lagrange basis. It is proved that, provided the nodes used to build the Lagrange-type basis and the corresponding collocation matrix are properly ordered, such matrices are strictly totally positive. A fast algorithm that computes the bidiagonal decomposition of these matrices to high relative accuracy is presented. As an application, eigenvalue computation, linear system solving, and inverse computation are carried out efficiently and accurately for matrices of this type. Moreover, the proposed algorithms solve some of these problems fast and to high relative accuracy when the involved matrices are collocation matrices of the standard Lagrange basis, even though those collocation matrices are not totally positive. Numerical experiments illustrating the good performance of the approach are included.
Impact of correlated observation errors on the conditioning of variational data assimilation problems
O. Goux, S. Gürol, A. Weaver, Y. Diouane, Oliver Guillet
Numerical Linear Algebra with Applications, published 9 August 2023. DOI: 10.1002/nla.2529

An important class of nonlinear weighted least-squares problems arises from the assimilation of observations in atmospheric and ocean models. In variational data assimilation, inverse error covariance matrices define the weighting matrices of the least-squares problem. For observation errors, a diagonal matrix (i.e., uncorrelated errors) is often assumed for simplicity even when observation errors are suspected to be correlated. While accounting for observation-error correlations should improve the quality of the solution, it also affects the convergence rate of the minimization algorithms used to iterate to the solution. If the minimization process is stopped before reaching full convergence, which is usually the case in operational applications, the solution may be degraded even if the observation-error correlations are correctly accounted for. In this article, we explore the influence of the observation-error correlation matrix (R) on the convergence rate of a preconditioned conjugate gradient (PCG) algorithm applied to a one-dimensional variational data assimilation (1D-Var) problem. We design the idealized 1D-Var system to include two key features used in more complex systems: we use the background error covariance matrix (B) as a preconditioner (B-PCG); and we use a diffusion operator to model spatial correlations in B and R. Analytical and numerical results with the 1D-Var system show a strong sensitivity of the convergence rate of B-PCG to the parameters of the diffusion-based correlation models. Depending on the parameter choices, correlated observation errors can either speed up or slow down the convergence. In practice, a compromise may be required in the parameter specifications of B and R between staying close to the best available estimates on the one hand and ensuring an adequate convergence rate of the minimization algorithm on the other.
Rank-structured approximation of some Cauchy matrices with sublinear complexity
Mikhail Lepilov, J. Xia
Numerical Linear Algebra with Applications, published 7 August 2023. DOI: 10.1002/nla.2526

In this article, we consider the rank-structured approximation of one important type of Cauchy matrix. This approximation plays a key role in some structured matrix methods such as stable and efficient direct solvers and other algorithms for Toeplitz matrices and certain kernel matrices. Previous rank-structured approximations (specifically hierarchically semiseparable, or HSS, approximations) for such a matrix of size n cost at least O(n) operations. Here, we show how to construct an HSS approximation with complexity sublinear in n. The main ideas include extensive computation reuse and an analytical far-field compression strategy. Low-rank compression at each hierarchical level is restricted to just a single off-diagonal block row, and a resulting basis matrix is then reused for other off-diagonal block rows as well as off-diagonal block columns. The relationships among the off-diagonal blocks are rigorously analyzed. The far-field compression uses an analytical proxy point method where we optimize the choice of some parameters so as to obtain accurate low-rank approximations. Both the basis reuse ideas and the resulting analytical hierarchical compression scheme can be generalized to some other kernel matrices and are useful for accelerating relevant rank-structured approximations (though not subsequent operations like matrix-vector multiplications).
Volume-based subset selection
Alexander Osinsky
Numerical Linear Algebra with Applications, published 31 July 2023. DOI: 10.1002/nla.2525

This paper provides a fast algorithm for the search of a dominant (locally maximum volume) submatrix, generalizing the existing algorithms from n ≤ r to n > r submatrix columns, where r is the number of searched rows. We prove a bound on the number of steps of the algorithm, which allows it to outperform existing subset selection algorithms in the bound on the norm of the pseudoinverse of the found submatrix, in the complexity bound, or both.
Total positivity and accurate computations with Gram matrices of Said-Ball bases
E. Mainar, J. M. Pena, B. Rubio
Numerical Linear Algebra with Applications, published 13 July 2023. DOI: 10.1002/nla.2521

In this article, it is proved that Gram matrices of totally positive bases of the space of polynomials of a given degree on a compact interval are totally positive. Conditions to guarantee computations to high relative accuracy with those matrices are also obtained. Furthermore, a fast and accurate algorithm to compute the bidiagonal factorization of Gram matrices of the Said-Ball bases is obtained and used to compute to high relative accuracy their singular values and inverses, as well as the solution of some linear systems associated with these matrices. Numerical examples are included.
In computational practice, most attention is paid to rational approximations of functions and to approximations by sums of exponentials. We consider a fairly wide class of nonlinear approximations in which each term is characterized by two parameters. The approximating function is linear in the first parameter of each term, and these parameters are assumed to be positive; each term contains a fixed function that depends nonlinearly on the second parameter. The numerical approximation minimizes the residual functional formed from the approximating function's values at individual points. The values of the second parameter are taken from a sufficiently dense set of points in its interval of admissible values. The key feature of the proposed approach is that the first parameters are determined at each iteration by the classical nonnegative least squares method. The computational algorithm is used for rational approximation of the function
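A minimal sketch of this scheme (Python/SciPy, with a synthetic sum-of-exponentials target rather than the rational approximation treated in the article, and with an arbitrary grid for the second parameter): the nonlinear parameter b is sampled on a dense grid of admissible values, and nonnegative least squares then selects the positive linear parameters a, leaving only a few active grid points:

```python
import numpy as np
from scipy.optimize import nnls

# Fit f(t) ~ sum_k a_k * exp(-b_k * t) with a_k >= 0: grid the nonlinear
# parameter b and let NNLS pick the weights a (illustrative data and grid).
t = np.linspace(0.0, 5.0, 200)
f = 2.0 * np.exp(-1.3 * t) + 0.7 * np.exp(-4.0 * t)   # synthetic target

b_grid = np.linspace(0.1, 10.0, 400)                  # admissible b values
Phi = np.exp(-t[:, None] * b_grid[None, :])           # design matrix

a, resid = nnls(Phi, f)                               # nonnegative weights
support = np.nonzero(a > 1e-10)[0]
print("active b values:", b_grid[support])            # clusters near 1.3 and 4.0
print("weights:", a[support], "residual norm:", resid)
```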