
Latest publications in Numerical Linear Algebra with Applications

Accurate bidiagonal decomposition of Lagrange–Vandermonde matrices and applications
IF 4.3 · Mathematics (CAS Tier 3, JCR Q1) · Pub Date: 2023-08-12 · DOI: 10.1002/nla.2527
A. Marco, José‐Javier Martínez, Raquel Viaña
Lagrange–Vandermonde matrices are the collocation matrices corresponding to Lagrange-type bases, obtained by removing the denominators from each element of a Lagrange basis. It is proved that, provided the nodes used to create the Lagrange-type basis and the corresponding collocation matrix are properly ordered, such matrices are strictly totally positive. A fast algorithm to compute the bidiagonal decomposition of these matrices to high relative accuracy is presented. As an application, eigenvalue computation, linear system solving, and inverse computation are carried out efficiently and accurately for this type of matrix. Moreover, the proposed algorithms solve some of the cited problems fast and to high relative accuracy when the involved matrices are collocation matrices corresponding to the standard Lagrange basis, even though such collocation matrices are not totally positive. Numerical experiments illustrating the good performance of our approach are also included.
Citations: 1
Impact of correlated observation errors on the conditioning of variational data assimilation problems
IF 4.3 · Mathematics (CAS Tier 3, JCR Q1) · Pub Date: 2023-08-09 · DOI: 10.1002/nla.2529
O. Goux, S. Gürol, A. Weaver, Y. Diouane, Oliver Guillet
An important class of nonlinear weighted least-squares problems arises from the assimilation of observations in atmospheric and ocean models. In variational data assimilation, inverse error covariance matrices define the weighting matrices of the least-squares problem. For observation errors, a diagonal matrix (i.e., uncorrelated errors) is often assumed for simplicity, even when observation errors are suspected to be correlated. While accounting for observation-error correlations should improve the quality of the solution, it also affects the convergence rate of the minimization algorithms used to iterate to the solution. If the minimization process is stopped before reaching full convergence, which is usually the case in operational applications, the solution may be degraded even if the observation-error correlations are correctly accounted for. In this article, we explore the influence of the observation-error correlation matrix on the convergence rate of a preconditioned conjugate gradient (PCG) algorithm applied to a one-dimensional variational data assimilation (1D-Var) problem. We design the idealized 1D-Var system to include two key features used in more complex systems: we use the background-error covariance matrix B as a preconditioner (B-PCG), and we use a diffusion operator to model spatial correlations in the background- and observation-error covariance matrices. Analytical and numerical results with the 1D-Var system show a strong sensitivity of the convergence rate of B-PCG to the parameters of the diffusion-based correlation models. Depending on the parameter choices, correlated observation errors can either speed up or slow down the convergence. In practice, a compromise may be required in the specification of these parameters, between staying close to the best available estimates on the one hand and ensuring an adequate convergence rate of the minimization algorithm on the other.
Citations: 0
Rank-structured approximation of some Cauchy matrices with sublinear complexity
IF 4.3 · Mathematics (CAS Tier 3, JCR Q1) · Pub Date: 2023-08-07 · DOI: 10.1002/nla.2526
Mikhail Lepilov, J. Xia
In this article, we consider the rank-structured approximation of one important type of Cauchy matrix. This approximation plays a key role in some structured matrix methods, such as stable and efficient direct solvers and other algorithms for Toeplitz matrices and certain kernel matrices. Previous rank-structured approximations (specifically hierarchically semiseparable, or HSS, approximations) for such a matrix cost at least linear complexity in the matrix size. Here, we show how to construct an HSS approximation with sublinear complexity. The main ideas include extensive computation reuse and an analytical far-field compression strategy. Low-rank compression at each hierarchical level is restricted to just a single off-diagonal block row, and the resulting basis matrix is then reused for the other off-diagonal block rows as well as the off-diagonal block columns. The relationships among the off-diagonal blocks are rigorously analyzed. The far-field compression uses an analytical proxy point method in which we optimize the choice of some parameters so as to obtain accurate low-rank approximations. Both the basis-reuse ideas and the resulting analytical hierarchical compression scheme can be generalized to some other kernel matrices and are useful for accelerating relevant rank-structured approximations (though not subsequent operations like matrix–vector multiplications).
Citations: 1
Volume-based subset selection
IF 4.3 · Mathematics (CAS Tier 3, JCR Q1) · Pub Date: 2023-07-31 · DOI: 10.1002/nla.2525
Alexander Osinsky
This paper provides a fast algorithm for the search of a dominant (locally maximum volume) submatrix, generalizing the existing algorithms from $n \leqslant r$ to $n > r$ submatrix columns, where $r$ is the number of searched rows. We prove a bound on the number of steps of the algorithm, which allows it to outperform the existing subset selection algorithms in either the bounds on the norm of the pseudoinverse of the found submatrix, or the bounds on the complexity, or both.
{"title":"Volume‐based subset selection","authors":"Alexander Osinsky","doi":"10.1002/nla.2525","DOIUrl":"https://doi.org/10.1002/nla.2525","url":null,"abstract":"This paper provides a fast algorithm for the search of a dominant (locally maximum volume) submatrix, generalizing the existing algorithms from n⩽r$$ nleqslant r $$ to n>r$$ n>r $$ submatrix columns, where r$$ r $$ is the number of searched rows. We prove the bound on the number of steps of the algorithm, which allows it to outperform the existing subset selection algorithms in either the bounds on the norm of the pseudoinverse of the found submatrix, or the bounds on the complexity, or both.","PeriodicalId":49731,"journal":{"name":"Numerical Linear Algebra with Applications","volume":" ","pages":""},"PeriodicalIF":4.3,"publicationDate":"2023-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46946721","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Total positivity and accurate computations with Gram matrices of Said-Ball bases
IF 4.3 · Mathematics (CAS Tier 3, JCR Q1) · Pub Date: 2023-07-13 · DOI: 10.1002/nla.2521
E. Mainar, J. M. Pena, B. Rubio
In this article, it is proved that Gram matrices of totally positive bases of the space of polynomials of a given degree on a compact interval are totally positive. Conditions to guarantee computations to high relative accuracy with those matrices are also obtained. Furthermore, a fast and accurate algorithm to compute the bidiagonal factorization of Gram matrices of the Said‐Ball bases is obtained and used to compute to high relative accuracy their singular values and inverses, as well as the solution of some linear systems associated with these matrices. Numerical examples are included.
{"title":"Total positivity and accurate computations with Gram matrices of Said‐Ball bases","authors":"E. Mainar, J. M. Pena, B. Rubio","doi":"10.1002/nla.2521","DOIUrl":"https://doi.org/10.1002/nla.2521","url":null,"abstract":"In this article, it is proved that Gram matrices of totally positive bases of the space of polynomials of a given degree on a compact interval are totally positive. Conditions to guarantee computations to high relative accuracy with those matrices are also obtained. Furthermore, a fast and accurate algorithm to compute the bidiagonal factorization of Gram matrices of the Said‐Ball bases is obtained and used to compute to high relative accuracy their singular values and inverses, as well as the solution of some linear systems associated with these matrices. Numerical examples are included.","PeriodicalId":49731,"journal":{"name":"Numerical Linear Algebra with Applications","volume":" ","pages":""},"PeriodicalIF":4.3,"publicationDate":"2023-07-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47051257","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Nonlinear approximation of functions based on nonnegative least squares solver
IF 4.3 · Mathematics (CAS Tier 3, JCR Q1) · Pub Date: 2023-07-10 · DOI: 10.1002/nla.2522
Petr N. Vabishchevich
In computational practice, most attention is paid to rational approximations of functions and to approximations by sums of exponentials. We consider a sufficiently wide class of nonlinear approximations characterized by a set of two required parameters. The approximating function is linear in the first parameter; these parameters are assumed to be positive. The individual terms of the approximating function represent a fixed function that depends nonlinearly on the second parameter. A numerical approximation minimizes the residual functional by approximating function values at individual points. The values of the second parameter are taken from a sufficiently large set of points in the interval of permissible values. The key feature of the proposed approach is that the first parameter is determined on each separate iteration of the classical nonnegative least squares method. The computational algorithm is used for the rational approximation of the function $x^{-\alpha}$, $0<\alpha<1$, $x \ge 1$. The second example concerns the approximation of the stretched exponential function $\exp(-x^{\alpha})$, $0<\alpha<1$, at $x \ge 0$ by a sum of exponentials.
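The linear-in-the-first-parameter structure can be sketched directly with `scipy.optimize.nnls`: take the second parameters (the exponential rates) from a wide candidate grid and compute the nonnegative coefficients by one NNLS solve. The grids and sizes below are illustrative choices, and the paper's full iterative algorithm differs in its details.

```python
import numpy as np
from scipy.optimize import nnls

# Approximate f(x) = x^{-alpha} on [1, 100] by sum_k a_k exp(-b_k x), a_k >= 0,
# with rates b_k from a wide candidate grid and coefficients a_k from NNLS.
alpha = 0.5
x = np.geomspace(1.0, 100.0, 400)      # fitting points
b = np.geomspace(1e-3, 10.0, 60)       # candidate second parameters (rates)

A = np.exp(-np.outer(x, b))            # design matrix, A[i, k] = exp(-b_k x_i)
f = x**(-alpha)

a, rnorm = nnls(A, f)
print("active terms:", np.count_nonzero(a), " residual:", rnorm)
print("max abs error:", np.max(np.abs(A @ a - f)))
```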
Citations: 0
Data-driven linear complexity low-rank approximation of general kernel matrices: A geometric approach
IF 4.3 · Mathematics (CAS Tier 3, JCR Q1) · Pub Date: 2023-07-04 · DOI: 10.1002/nla.2519
Difeng Cai, Edmond Chow, Yuanzhe Xi
A general, rectangular kernel matrix may be defined as $K_{ij} = \kappa(x_i, y_j)$, where $\kappa(x, y)$ is a kernel function and where $X = \{x_i\}_{i=1}^{m}$ and $Y = \{y_i\}_{i=1}^{n}$ are two sets of points. In this paper, we seek a low-rank approximation to a kernel matrix where the sets of points $X$ and $Y$ are large and are arbitrarily distributed, such as away from each other, "intermingled", identical, and so forth. Such rectangular kernel matrices may arise, for example, in Gaussian process regression where $X$ corresponds to the training data and $Y$ corresponds to the test data. In such cases, the points are often high-dimensional. Since the point sets are large, we must exploit the fact that the matrix arises from a kernel function and avoid forming the matrix, which precludes most algebraic techniques. In particular, we seek methods that can scale linearly or nearly linearly with respect to the size of the data for a fixed approximation rank. The main idea of this paper is to geometrically select appropriate subsets of points to construct the low-rank approximation, and the analysis in the paper guides how this selection is done.
{"title":"Data-driven linear complexity low-rank approximation of general kernel matrices: A geometric approach","authors":"Difeng Cai, Edmond Chow, Yuanzhe Xi","doi":"10.1002/nla.2519","DOIUrl":"https://doi.org/10.1002/nla.2519","url":null,"abstract":"A general, &lt;i&gt;rectangular&lt;/i&gt; kernel matrix may be defined as &lt;math altimg=\"urn:x-wiley:nla:media:nla2519:nla2519-math-0001\" display=\"inline\" location=\"graphic/nla2519-math-0001.png\" overflow=\"scroll\"&gt;\u0000&lt;semantics&gt;\u0000&lt;mrow&gt;\u0000&lt;msub&gt;\u0000&lt;mrow&gt;\u0000&lt;mi&gt;K&lt;/mi&gt;\u0000&lt;/mrow&gt;\u0000&lt;mrow&gt;\u0000&lt;mi&gt;i&lt;/mi&gt;\u0000&lt;mi&gt;j&lt;/mi&gt;\u0000&lt;/mrow&gt;\u0000&lt;/msub&gt;\u0000&lt;mo&gt;=&lt;/mo&gt;\u0000&lt;mi&gt;κ&lt;/mi&gt;\u0000&lt;mo stretchy=\"false\"&gt;(&lt;/mo&gt;\u0000&lt;msub&gt;\u0000&lt;mrow&gt;\u0000&lt;mi&gt;x&lt;/mi&gt;\u0000&lt;/mrow&gt;\u0000&lt;mrow&gt;\u0000&lt;mi&gt;i&lt;/mi&gt;\u0000&lt;/mrow&gt;\u0000&lt;/msub&gt;\u0000&lt;mo&gt;,&lt;/mo&gt;\u0000&lt;msub&gt;\u0000&lt;mrow&gt;\u0000&lt;mi&gt;y&lt;/mi&gt;\u0000&lt;/mrow&gt;\u0000&lt;mrow&gt;\u0000&lt;mi&gt;j&lt;/mi&gt;\u0000&lt;/mrow&gt;\u0000&lt;/msub&gt;\u0000&lt;mo stretchy=\"false\"&gt;)&lt;/mo&gt;\u0000&lt;/mrow&gt;\u0000$$ {K}_{ij}=kappa left({x}_i,{y}_jright) $$&lt;/annotation&gt;\u0000&lt;/semantics&gt;&lt;/math&gt; where &lt;math altimg=\"urn:x-wiley:nla:media:nla2519:nla2519-math-0002\" display=\"inline\" location=\"graphic/nla2519-math-0002.png\" overflow=\"scroll\"&gt;\u0000&lt;semantics&gt;\u0000&lt;mrow&gt;\u0000&lt;mi&gt;κ&lt;/mi&gt;\u0000&lt;mo stretchy=\"false\"&gt;(&lt;/mo&gt;\u0000&lt;mi&gt;x&lt;/mi&gt;\u0000&lt;mo&gt;,&lt;/mo&gt;\u0000&lt;mi&gt;y&lt;/mi&gt;\u0000&lt;mo stretchy=\"false\"&gt;)&lt;/mo&gt;\u0000&lt;/mrow&gt;\u0000$$ kappa left(x,yright) $$&lt;/annotation&gt;\u0000&lt;/semantics&gt;&lt;/math&gt; is a kernel function and where &lt;math altimg=\"urn:x-wiley:nla:media:nla2519:nla2519-math-0003\" display=\"inline\" location=\"graphic/nla2519-math-0003.png\" overflow=\"scroll\"&gt;\u0000&lt;semantics&gt;\u0000&lt;mrow&gt;\u0000&lt;mi&gt;X&lt;/mi&gt;\u0000&lt;mo&gt;=&lt;/mo&gt;\u0000&lt;msubsup&gt;\u0000&lt;mrow&gt;\u0000&lt;mo stretchy=\"false\"&gt;{&lt;/mo&gt;\u0000&lt;msub&gt;\u0000&lt;mrow&gt;\u0000&lt;mi&gt;x&lt;/mi&gt;\u0000&lt;/mrow&gt;\u0000&lt;mrow&gt;\u0000&lt;mi&gt;i&lt;/mi&gt;\u0000&lt;/mrow&gt;\u0000&lt;/msub&gt;\u0000&lt;mo stretchy=\"false\"&gt;}&lt;/mo&gt;\u0000&lt;/mrow&gt;\u0000&lt;mrow&gt;\u0000&lt;mi&gt;i&lt;/mi&gt;\u0000&lt;mo&gt;=&lt;/mo&gt;\u0000&lt;mn&gt;1&lt;/mn&gt;\u0000&lt;/mrow&gt;\u0000&lt;mrow&gt;\u0000&lt;mi&gt;m&lt;/mi&gt;\u0000&lt;/mrow&gt;\u0000&lt;/msubsup&gt;\u0000&lt;/mrow&gt;\u0000$$ X={left{{x}_iright}}_{i=1}^m $$&lt;/annotation&gt;\u0000&lt;/semantics&gt;&lt;/math&gt; and &lt;math altimg=\"urn:x-wiley:nla:media:nla2519:nla2519-math-0004\" display=\"inline\" location=\"graphic/nla2519-math-0004.png\" overflow=\"scroll\"&gt;\u0000&lt;semantics&gt;\u0000&lt;mrow&gt;\u0000&lt;mi&gt;Y&lt;/mi&gt;\u0000&lt;mo&gt;=&lt;/mo&gt;\u0000&lt;msubsup&gt;\u0000&lt;mrow&gt;\u0000&lt;mo stretchy=\"false\"&gt;{&lt;/mo&gt;\u0000&lt;msub&gt;\u0000&lt;mrow&gt;\u0000&lt;mi&gt;y&lt;/mi&gt;\u0000&lt;/mrow&gt;\u0000&lt;mrow&gt;\u0000&lt;mi&gt;i&lt;/mi&gt;\u0000&lt;/mrow&gt;\u0000&lt;/msub&gt;\u0000&lt;mo 
stretchy=\"false\"&gt;}&lt;/mo&gt;\u0000&lt;/mrow&gt;\u0000&lt;mrow&gt;\u0000&lt;mi&gt;i&lt;/mi&gt;\u0000&lt;mo&gt;=&lt;/mo&gt;\u0000&lt;mn&gt;1&lt;/mn&gt;\u0000&lt;/mrow&gt;\u0000&lt;mrow&gt;\u0000&lt;mi&gt;n&lt;/mi&gt;\u0000&lt;/mrow&gt;\u0000&lt;/msubsup&gt;\u0000&lt;/mrow&gt;\u0000$$ Y={left{{y}_iright}}_{i=1}^n $$&lt;/annotation&gt;\u0000&lt;/semantics&gt;&lt;/math&gt; are two sets of points. In this paper, we seek a low-rank approximation to a kernel matrix where the sets of points &lt;math altimg=\"urn:x-wiley:nla:media:nla2519:nla2519-math-0005\" display=\"inline\" location=\"graphic/nla2519-math-0005.png\" overflow=\"scroll\"&gt;\u0000&lt;semantics&gt;\u0000&lt;mrow&gt;\u0000&lt;mi&gt;X&lt;/mi&gt;\u0000&lt;/mrow&gt;\u0000$$ X $$&lt;/annotation&gt;\u0000&lt;/semantics&gt;&lt;/math&gt; and &lt;math altimg=\"urn:x-wiley:nla:media:nla2519:nla2519-math-0006\" display=\"inline\" location=\"graphic/nla2519-math-0006.png\" overflow=\"scroll\"&gt;\u0000&lt;semantics&gt;\u0000&lt;mrow&gt;\u0000&lt;mi&gt;Y&lt;/mi&gt;\u0000&lt;/mrow&gt;\u0000$$ Y $$&lt;/annotation&gt;\u0000&lt;/semantics&gt;&lt;/math&gt; are large and are arbitrarily distributed, such as away from each other, “intermingled”, identical, and so forth. Such rectangular kernel matrices may arise, for example, in Gaussian process regression where &lt;math altimg=\"urn:x-wiley:nla:media:nla2519:nla2519-math-0007\" display=\"inline\" location=\"graphic/nla2519-math-0007.png\" overflow=\"scroll\"&gt;\u0000&lt;semantics&gt;\u0000&lt;mrow&gt;\u0000&lt;mi&gt;X&lt;/mi&gt;\u0000&lt;/mrow&gt;\u0000$$ X $$&lt;/annotation&gt;\u0000&lt;/semantics&gt;&lt;/math&gt; corresponds to the training data and &lt;math altimg=\"urn:x-wil","PeriodicalId":49731,"journal":{"name":"Numerical Linear Algebra with Applications","volume":"148 1","pages":""},"PeriodicalIF":4.3,"publicationDate":"2023-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138502936","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
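To convey the flavor of geometric selection, the sketch below picks landmark subsets by farthest-point sampling (a stand-in rule; the paper's analysis guides the actual choice) and forms a skeleton approximation $K \approx K_{:,J}\,(K_{I,J})^{+}\,K_{I,:}$ that never needs the full matrix; the dense reference is computed here only because the demo is small.

```python
import numpy as np

def farthest_point_sample(P, k, seed=0):
    """Greedy farthest-point sampling -- a simple geometric selection rule."""
    rng = np.random.default_rng(seed)
    idx = [int(rng.integers(len(P)))]
    d = np.linalg.norm(P - P[idx[0]], axis=1)
    for _ in range(k - 1):
        idx.append(int(np.argmax(d)))
        d = np.minimum(d, np.linalg.norm(P - P[idx[-1]], axis=1))
    return np.array(idx)

rng = np.random.default_rng(3)
X = rng.standard_normal((1000, 3))        # e.g., training points
Y = rng.standard_normal((800, 3)) + 2.0   # test points, shifted away

kern = lambda U, V: np.exp(-0.5 * np.linalg.norm(U[:, None] - V[None, :], axis=-1) ** 2)

I = farthest_point_sample(X, 40)          # landmark rows
J = farthest_point_sample(Y, 40)          # landmark columns
# Skeleton (cross) approximation: K ~= K[:, J] pinv(K[I, J]) K[I, :]
K_approx = kern(X, Y[J]) @ np.linalg.pinv(kern(X[I], Y[J])) @ kern(X[I], Y)

K = kern(X, Y)                            # dense reference (small demo only)
print("relative error:", np.linalg.norm(K - K_approx) / np.linalg.norm(K))
```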
Citations: 2
Issue Information
IF 4.3 · Mathematics (CAS Tier 3, JCR Q1) · Pub Date: 2023-07-02 · DOI: 10.1002/nla.2452
{"title":"Issue Information","authors":"","doi":"10.1002/nla.2452","DOIUrl":"https://doi.org/10.1002/nla.2452","url":null,"abstract":"","PeriodicalId":49731,"journal":{"name":"Numerical Linear Algebra with Applications","volume":" ","pages":""},"PeriodicalIF":4.3,"publicationDate":"2023-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44500622","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Blockwise acceleration of alternating least squares for canonical tensor decomposition
IF 4.3 · Mathematics (CAS Tier 3, JCR Q1) · Pub Date: 2023-06-21 · DOI: 10.1002/nla.2516
D. Evans, Nan Ye
The canonical polyadic (CP) decomposition of tensors is one of the most important tensor decompositions. While the well‐known alternating least squares (ALS) algorithm is often considered the workhorse algorithm for computing the CP decomposition, it is known to suffer from slow convergence in many cases and various algorithms have been proposed to accelerate it. In this article, we propose a new accelerated ALS algorithm that accelerates ALS in a blockwise manner using a simple momentum‐based extrapolation technique and a random perturbation technique. Specifically, our algorithm updates one factor matrix (i.e., block) at a time, as in ALS, with each update consisting of a minimization step that directly reduces the reconstruction error, an extrapolation step that moves the factor matrix along the previous update direction, and a random perturbation step for breaking convergence bottlenecks. Our extrapolation strategy takes a simpler form than the state‐of‐the‐art extrapolation strategies and is easier to implement. Our algorithm has negligible computational overheads relative to ALS and is simple to apply. Empirically, our proposed algorithm shows strong performance as compared to the state‐of‐the‐art acceleration techniques on both simulated and real tensors.
{"title":"Blockwise acceleration of alternating least squares for canonical tensor decomposition","authors":"D. Evans, Nan Ye","doi":"10.1002/nla.2516","DOIUrl":"https://doi.org/10.1002/nla.2516","url":null,"abstract":"The canonical polyadic (CP) decomposition of tensors is one of the most important tensor decompositions. While the well‐known alternating least squares (ALS) algorithm is often considered the workhorse algorithm for computing the CP decomposition, it is known to suffer from slow convergence in many cases and various algorithms have been proposed to accelerate it. In this article, we propose a new accelerated ALS algorithm that accelerates ALS in a blockwise manner using a simple momentum‐based extrapolation technique and a random perturbation technique. Specifically, our algorithm updates one factor matrix (i.e., block) at a time, as in ALS, with each update consisting of a minimization step that directly reduces the reconstruction error, an extrapolation step that moves the factor matrix along the previous update direction, and a random perturbation step for breaking convergence bottlenecks. Our extrapolation strategy takes a simpler form than the state‐of‐the‐art extrapolation strategies and is easier to implement. Our algorithm has negligible computational overheads relative to ALS and is simple to apply. Empirically, our proposed algorithm shows strong performance as compared to the state‐of‐the‐art acceleration techniques on both simulated and real tensors.","PeriodicalId":49731,"journal":{"name":"Numerical Linear Algebra with Applications","volume":" ","pages":""},"PeriodicalIF":4.3,"publicationDate":"2023-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49113853","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
The generalized residual cutting method and its convergence characteristics
IF 4.3 · Mathematics (CAS Tier 3, JCR Q1) · Pub Date: 2023-06-20 · DOI: 10.1002/nla.2517
T. Abe, Anthony T. Chronopoulos
Iterative methods, and especially Krylov subspace methods (KSM), are a very useful numerical tool for solving the large, sparse linear systems that arise in science and engineering modeling. More recently, nested-loop KSM have been proposed that improve the convergence of traditional KSM. In this article, we review the residual cutting (RC) method and the generalized residual cutting (GRC) method, which are nested-loop methods for large, sparse linear systems. We also show that GRC is a KSM that is equivalent to Orthomin with variable preconditioning. We use the modified Gram–Schmidt method to derive a stable GRC algorithm. We show that GRC provides a general framework for constructing a class of "hybrid" (nested) KSM based on the choice of inner-loop method. We conduct numerical experiments using nonsymmetric indefinite matrices from a widely used library of sparse matrices, which validate the efficiency and robustness of the proposed methods.
{"title":"The generalized residual cutting method and its convergence characteristics","authors":"T. Abe, Anthony T. Chronopoulos","doi":"10.1002/nla.2517","DOIUrl":"https://doi.org/10.1002/nla.2517","url":null,"abstract":"Iterative methods and especially Krylov subspace methods (KSM) are a very useful numerical tool in solving for large and sparse linear systems problems arising in science and engineering modeling. More recently, the nested loop KSM have been proposed that improve the convergence of the traditional KSM. In this article, we review the residual cutting (RC) and the generalized residual cutting (GRC) that are nested loop methods for large and sparse linear systems problems. We also show that GRC is a KSM that is equivalent to Orthomin with a variable preconditioning. We use the modified Gram–Schmidt method to derive a stable GRC algorithm. We show that GRC presents a general framework for constructing a class of “hybrid” (nested) KSM based on inner loop method selection. We conduct numerical experiments using nonsymmetric indefinite matrices from a widely used library of sparse matrices that validate the efficiency and the robustness of the proposed methods.","PeriodicalId":49731,"journal":{"name":"Numerical Linear Algebra with Applications","volume":" ","pages":""},"PeriodicalIF":4.3,"publicationDate":"2023-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46177245","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0