
Numerical Linear Algebra with Applications: Latest Publications

A two‐step matrix splitting iteration paradigm based on one single splitting for solving systems of linear equations
IF 4.3 | CAS Tier 3 (Mathematics) | Q1 MATHEMATICS | Pub Date: 2023-06-11 | DOI: 10.1002/nla.2510
Z. Bai
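No abstract is available for this entry. As a generic illustration only (not Bai's actual paradigm), a stationary iteration built from one single splitting A = M - N can be applied in several inner sweeps per outer iteration; the sketch below uses the Jacobi splitting M = diag(A):

```python
import numpy as np

def splitting_iteration(A, b, x0, steps=2, iters=50, tol=1e-10):
    """Stationary iteration from a single splitting A = M - N (here Jacobi).

    Performing `steps` sweeps per outer iteration gives a simple two-step
    variant that reuses the same M and N; this is only a generic sketch,
    not the paradigm proposed in the paper.
    """
    M = np.diag(np.diag(A))            # one single splitting: A = M - N
    N = M - A
    x = x0.astype(float)
    for _ in range(iters):
        for _ in range(steps):         # inner sweeps reuse the same splitting
            x = np.linalg.solve(M, N @ x + b)
        if np.linalg.norm(b - A @ x) <= tol * np.linalg.norm(b):
            break
    return x

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])        # diagonally dominant, so Jacobi converges
b = np.array([1.0, 2.0, 3.0])
x = splitting_iteration(A, b, np.zeros(3))
```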
Citations: 0
A Vanka‐based parameter‐robust multigrid relaxation for the Stokes–Darcy Brinkman problems
CAS Tier 3 (Mathematics) | Q1 MATHEMATICS | Pub Date: 2023-06-05 | DOI: 10.1002/nla.2514
Yunhui He
Abstract We consider a block-structured multigrid method based on Braess–Sarazin relaxation for solving the Stokes–Darcy Brinkman equations discretized by the marker-and-cell scheme. In the relaxation scheme, an element-based additive Vanka operator is used to approximate the inverse of the corresponding shifted Laplacian operator involved in the discrete Stokes–Darcy Brinkman system. Using local Fourier analysis, we present the stencil for the additive Vanka smoother and derive an optimal smoothing factor for Vanka-based Braess–Sarazin relaxation for the Stokes–Darcy Brinkman equations. Although the optimal damping parameter depends on the meshsize and the physical parameters, it is very close to one. In practice, we find that three sweeps of Jacobi relaxation on the Schur complement system are sufficient. Numerical results for the two-grid method and the V(1,1)-cycle are presented; they show the high efficiency of the proposed relaxation scheme and its robustness with respect to the physical parameters and the meshsize. Using a damping parameter equal to one gives almost the same convergence results as those for the optimal damping parameter.
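The abstract's observation that a few Jacobi sweeps on the Schur complement suffice can be illustrated with a minimal damped-Jacobi sketch; the matrix S below is a stand-in SPD operator (a 1D Laplacian), not the actual Stokes–Darcy Brinkman Schur complement:

```python
import numpy as np

def damped_jacobi(S, r, sweeps=3, omega=1.0):
    """Approximately solve S y = r with a fixed number of damped-Jacobi sweeps.

    Used here as a cheap approximate inverse, in the spirit of the relaxation
    scheme described in the abstract (omega = 1 corresponds to the damping
    parameter equal to one).
    """
    d = np.diag(S)
    y = np.zeros_like(r)
    for _ in range(sweeps):
        y = y + omega * (r - S @ y) / d
    return y

# 1D Laplacian as a stand-in for the Schur complement operator
n = 8
S = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
r = np.ones(n)
y3 = damped_jacobi(S, r, sweeps=3)     # three sweeps, as in the abstract
```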
Citations: 0
CP decomposition for tensors via alternating least squares with QR decomposition
IF 4.3 | CAS Tier 3 (Mathematics) | Q1 MATHEMATICS | Pub Date: 2023-06-05 | DOI: 10.1002/nla.2511
Rachel Minster, Irina Viviano, Xiaotian Liu, Grey Ballard
The CP tensor decomposition is used in applications such as machine learning and signal processing to discover latent low‐rank structure in multidimensional data. Computing a CP decomposition via an alternating least squares (ALS) method reduces the problem to several linear least squares problems. The standard way to solve these linear least squares subproblems is to use the normal equations, which inherit special tensor structure that can be exploited for computational efficiency. However, the normal equations are sensitive to numerical ill‐conditioning, which can compromise the results of the decomposition. In this paper, we develop versions of the CP‐ALS algorithm using the QR decomposition and the singular value decomposition, which are more numerically stable than the normal equations, to solve the linear least squares problems. Our algorithms utilize the tensor structure of the CP‐ALS subproblems efficiently, have the same complexity as the standard CP‐ALS algorithm when the input is dense and the rank is small, and are shown via examples to produce more stable results when ill‐conditioning is present. Our MATLAB implementation achieves the same running time as the standard algorithm for small ranks, and we show that the new methods can obtain lower approximation error.
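The stability gap between the normal equations and a QR-based solve can be seen on a small dense least-squares problem; this is only a stand-in for one CP-ALS subproblem (the paper's algorithms additionally exploit the Khatri–Rao structure of the coefficient matrix):

```python
import numpy as np

def ls_normal_equations(A, b):
    # Solve A^T A x = A^T b: forming the Gram matrix squares the condition
    # number of A, so accuracy degrades sharply for ill-conditioned A.
    return np.linalg.solve(A.T @ A, A.T @ b)

def ls_qr(A, b):
    # Solve via a thin QR factorization: R x = Q^T b, accurate up to
    # roughly cond(A) * machine epsilon rather than cond(A)^2.
    Q, R = np.linalg.qr(A)
    return np.linalg.solve(R, Q.T @ b)

# Ill-conditioned least-squares problem (cond(A) on the order of 1e8, so the
# normal-equations Gram matrix would be numerically near-singular)
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 5)) @ np.diag([1.0, 1.0, 1e-4, 1e-4, 1e-8])
x_true = np.ones(5)
b = A @ x_true
x_qr = ls_qr(A, b)
```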
Citations: 1
Convergence acceleration of preconditioned conjugate gradient solver based on error vector sampling for a sequence of linear systems
IF 4.3 | CAS Tier 3 (Mathematics) | Q1 MATHEMATICS | Pub Date: 2023-05-31 | DOI: 10.1002/nla.2512
Takeshi Iwashita, Kota Ikehara, Takeshi Fukaya, T. Mifune
In this article, we focus on solving a sequence of linear systems that have identical (or similar) coefficient matrices. For this type of problem, we investigate subspace correction (SC) and deflation methods, which use an auxiliary matrix (subspace) to accelerate the convergence of the iterative method. In practical simulations, these acceleration methods typically work well when the range of the auxiliary matrix contains eigenspaces corresponding to small eigenvalues of the coefficient matrix. We develop a new algebraic auxiliary matrix construction method based on error vector sampling in which eigenvectors with small eigenvalues are efficiently identified in the solution process. We use the generated auxiliary matrix for convergence acceleration in the following solution step. Numerical tests confirm that both SC and deflation methods with the auxiliary matrix can accelerate the solution process of the iterative solver. Furthermore, we examine the applicability of our technique to the estimation of the condition number of the coefficient matrix. We also present the algorithm of the preconditioned conjugate gradient method with condition number estimation.
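The core subspace-correction step — using an auxiliary matrix W to eliminate the slow eigencomponents before the iterative solve — can be sketched as a Galerkin projection. In the paper, W is built algebraically from sampled error vectors of an earlier solve in the sequence; here exact eigenvectors are used purely for illustration:

```python
import numpy as np

def deflated_initial_guess(A, b, W):
    """Subspace-correction (Galerkin) step: solve the small projected system
    (W^T A W) y = W^T b and take x0 = W y as the starting guess, so the
    initial residual is orthogonal to range(W).
    """
    y = np.linalg.solve(W.T @ A @ W, W.T @ b)
    return W @ y

n = 20
A = np.diag(np.linspace(0.01, 1.0, n))   # small eigenvalues slow plain CG
b = np.ones(n)
vals, vecs = np.linalg.eigh(A)
W = vecs[:, :3]                          # span of the 3 smallest eigenvectors
x0 = deflated_initial_guess(A, b, W)
r0 = b - A @ x0                          # residual component in span(W) is gone
```

A subsequent CG solve started from x0 no longer has to resolve the eigenspaces captured by W, which is the mechanism behind the acceleration reported in the abstract.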
Citations: 0
Multilevel‐in‐width training for deep neural network regression
IF 4.3 | CAS Tier 3 (Mathematics) | Q1 MATHEMATICS | Pub Date: 2023-05-19 | DOI: 10.1002/nla.2501
Colin Ponce, Ruipeng Li, Christina Mao, P. Vassilevski
A common challenge in regression is that for many problems, the degrees of freedom required for a high-quality solution also allow for overfitting. Regularization is a class of strategies that restrict the range of possible solutions so as to discourage overfitting while still enabling good solutions; different regularization strategies impose different types of restrictions. In this paper, we present a multilevel regularization strategy that constructs and trains a hierarchy of neural networks, each of which has layers that are wider versions of the previous network's layers. We draw intuition and techniques from the field of Algebraic Multigrid (AMG), traditionally used for solving linear and nonlinear systems of equations, and specifically adapt the Full Approximation Scheme (FAS) for nonlinear systems of equations to the problem of deep learning. Training through V-cycles then encourages the neural networks to build a hierarchical understanding of the problem. We refer to this approach as multilevel-in-width to distinguish it from prior multilevel works that hierarchically alter the depth of neural networks. The resulting approach is a highly flexible framework that can be applied to a variety of layer types, which we demonstrate with both fully connected and convolutional layers. We show experimentally on PDE regression problems that our multilevel training approach is an effective regularizer, improving the generalization performance of the neural networks studied.
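The "wider versions of the previous network's layers" idea can be sketched as a function-preserving widening of a dense layer: duplicate hidden units and halve their outgoing weights. This is only a toy analogue of the prolongation between hierarchy levels; the paper's AMG/FAS construction is more elaborate:

```python
import numpy as np

def widen_dense(W1, W2):
    """Double the hidden width of a two-layer linear network while preserving
    its input-output map: duplicate the hidden units (columns of W1) and split
    each unit's outgoing weight (rows of W2) evenly between the two copies.
    """
    W1_wide = np.hstack([W1, W1])          # duplicated hidden units
    W2_wide = np.vstack([W2, W2]) / 2.0    # halved outgoing weights
    return W1_wide, W2_wide

rng = np.random.default_rng(1)
W1 = rng.standard_normal((4, 3))   # input dim 4 -> hidden width 3
W2 = rng.standard_normal((3, 2))   # hidden width 3 -> output dim 2
W1w, W2w = widen_dense(W1, W2)     # hidden width 3 -> 6, same function
x = rng.standard_normal(4)
```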
Citations: 0
Preconditioned tensor format conjugate gradient squared and biconjugate gradient stabilized methods for solving Stein tensor equations
IF 4.3 | CAS Tier 3 (Mathematics) | Q1 MATHEMATICS | Pub Date: 2023-05-10 | DOI: 10.1002/nla.2502
Yuhan Chen, Chenliang Li
This article is concerned with solving the high-order Stein tensor equation arising in control theory. The conjugate gradient squared (CGS) method and the biconjugate gradient stabilized (BiCGSTAB) method are attractive methods for solving linear systems. Compared with the equivalent large-scale matrix equation, the tensor equation requires less storage space and lower computational cost. We therefore present tensor formats of the CGS and BiCGSTAB methods for solving high-order Stein tensor equations. Moreover, a nearest-Kronecker-product preconditioner is given, and the preconditioned tensor-format methods are studied. Finally, the feasibility and effectiveness of the new methods are verified by numerical examples.
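For the matrix (order-2) special case, the Stein equation X - A X B = C can be vectorized as (I - Bᵀ ⊗ A) vec(X) = vec(C) and handed to BiCGSTAB. The sketch below forms the Kronecker matrix explicitly for clarity; the paper's tensor-format methods deliberately avoid this and work in compressed form:

```python
import numpy as np
from scipy.sparse.linalg import bicgstab

def stein_residual(A, B, C, X):
    # Residual of the matrix Stein equation X - A X B = C
    return np.linalg.norm(X - A @ X @ B - C)

rng = np.random.default_rng(2)
n = 6
A = 0.3 * rng.standard_normal((n, n))   # scaled so the equation is well posed
B = 0.3 * rng.standard_normal((n, n))
C = rng.standard_normal((n, n))

# vec(A X B) = (B^T kron A) vec(X) with column-major (Fortran-order) vec
M = np.eye(n * n) - np.kron(B.T, A)
x_vec, info = bicgstab(M, C.reshape(-1, order="F"))
X = x_vec.reshape((n, n), order="F")
```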
Citations: 0
Issue Information
IF 4.3 | CAS Tier 3 (Mathematics) | Q1 MATHEMATICS | Pub Date: 2023-04-02 | DOI: 10.1002/nla.2451
Citations: 0
Anderson accelerated fixed‐point iteration for multilinear PageRank
IF 4.3 | CAS Tier 3 (Mathematics) | Q1 MATHEMATICS | Pub Date: 2023-03-28 | DOI: 10.1002/nla.2499
Fuqi Lai, Wen Li, Xiaofei Peng, Yannan Chen
In this paper, we apply the Anderson acceleration technique to the existing relaxation fixed‐point iteration for solving the multilinear PageRank. In order to reduce computational cost, we further consider the periodical version of the Anderson acceleration. The convergence of the proposed algorithms is discussed. Numerical experiments on synthetic and real‐world datasets are performed to demonstrate the advantages of the proposed algorithms over the relaxation fixed‐point iteration and the extrapolated shifted fixed‐point method. In particular, we give a strategy for choosing the quasi‐optimal parameters of the associated algorithms when they are applied to solve the test problems with different sizes but the same structure.
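A textbook windowed Anderson acceleration can be sketched on a simple fixed-point map. The ordinary PageRank iteration below is only a stand-in for the multilinear map x = αR(x ⊗ x) + (1-α)v studied in the paper, and this sketch omits the paper's periodic variant:

```python
import numpy as np

def anderson(g, x0, m=3, iters=200, tol=1e-12):
    """Windowed (depth-m) Anderson acceleration of x <- g(x)."""
    x = np.asarray(x0, dtype=float)
    X, G = [], []                                  # iterate and g-value history
    for _ in range(iters):
        gx = g(x)
        X.append(x); G.append(gx)
        X, G = X[-(m + 1):], G[-(m + 1):]
        F = np.array(G) - np.array(X)              # residuals f_k = g(x_k) - x_k
        if np.linalg.norm(F[-1]) < tol:
            return G[-1]
        if len(X) == 1:
            x = gx                                 # plain fixed-point step first
            continue
        dF = (F[1:] - F[:-1]).T                    # columns: residual differences
        gamma, *_ = np.linalg.lstsq(dF, F[-1], rcond=None)
        dG = (np.array(G)[1:] - np.array(G)[:-1]).T
        x = G[-1] - dG @ gamma                     # unconstrained mixing update
    return x

# Ordinary PageRank fixed point x = alpha * P^T x + (1 - alpha) * v
alpha = 0.85
P = np.array([[0.0, 0.5, 0.5],
              [1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])                    # row-stochastic link matrix
v = np.ones(3) / 3
g = lambda x: alpha * P.T @ x + (1 - alpha) * v
x_star = anderson(g, v)
```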
Citations: 0
Convergence analysis of a block preconditioned steepest descent eigensolver with implicit deflation
CAS Tier 3 (Mathematics) | Q1 MATHEMATICS | Pub Date: 2023-03-15 | DOI: 10.1002/nla.2498
Ming Zhou, Zhaojun Bai, Yunfeng Cai, Klaus Neymeyr
Abstract Gradient-type iterative methods for solving Hermitian eigenvalue problems can be accelerated by using preconditioning and deflation techniques. A preconditioned steepest descent iteration with implicit deflation (PSD-id) is one such method. The convergence behavior of the PSD-id has recently been investigated based on the pioneering work of Samokish on the preconditioned steepest descent method (PSD). The resulting non-asymptotic estimates indicate superlinear convergence of the PSD-id under strong assumptions on the initial guess. The present paper utilizes an alternative convergence analysis of the PSD by Neymeyr under much weaker assumptions. We embed Neymeyr's approach into the analysis of the PSD-id using a restricted formulation of the PSD-id. More importantly, we extend the new convergence analysis of the PSD-id to a practically preferred block version, the BPSD-id, and show the cluster robustness of the BPSD-id. Numerical examples are provided to validate the theoretical estimates.
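The basic PSD iteration underlying these methods can be sketched as a Rayleigh–Ritz step in the two-dimensional space spanned by the current iterate and the preconditioned eigenresidual. This sketch has no deflation or blocking; the PSD-id and BPSD-id analyzed in the paper add those on top:

```python
import numpy as np

def psd_smallest_eig(A, T, x0, iters=200):
    """Preconditioned steepest descent for the smallest eigenpair of SPD A.

    Each step preconditions the eigenresidual with T and performs a
    Rayleigh-Ritz step in span{x, T r}.
    """
    x = x0 / np.linalg.norm(x0)
    for _ in range(iters):
        rho = x @ A @ x                      # Rayleigh quotient
        r = A @ x - rho * x                  # eigenresidual
        p = T @ r                            # preconditioned search direction
        V, _ = np.linalg.qr(np.column_stack([x, p]))
        theta, Y = np.linalg.eigh(V.T @ A @ V)
        x = V @ Y[:, 0]                      # Ritz vector for smallest value
        x = x / np.linalg.norm(x)
    return x @ A @ x, x

A = np.diag(np.arange(1.0, 21.0))            # smallest eigenvalue is 1
T = np.eye(20)                               # identity as a trivial preconditioner
rng = np.random.default_rng(3)
lam, x = psd_smallest_eig(A, T, rng.standard_normal(20))
```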
Citations: 0
A block Cholesky‐LU‐based QR factorization for rectangular matrices
IF 4.3 | CAS Tier 3 (Mathematics) | Q1 MATHEMATICS | Pub Date: 2023-02-25 | DOI: 10.1002/nla.2497
S. Le Borne
The Householder method provides a stable algorithm to compute the full QR factorization of a general matrix. The standard version of the algorithm uses a sequence of orthogonal reflections to transform the matrix into upper triangular form column by column. In order to exploit (level 3 BLAS or structured matrix) computational advantages for block‐partitioned algorithms, we develop a block algorithm for the QR factorization. It is based on a well‐known block version of the Householder method which recursively divides a matrix columnwise into two smaller matrices. However, instead of continuing the recursion down to single matrix columns, we introduce a novel way to compute the QR factors in implicit Householder representation for a larger block of several matrix columns, that is, we start the recursion at a block level instead of a single column. Numerical experiments illustrate to what extent the novel approach trades some of the stability of Householder's method for the computational efficiency of block methods.
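The stability-versus-efficiency trade-off mentioned at the end of the abstract is easiest to see in CholeskyQR, the simplest relative of the paper's block Cholesky-LU approach: like it, CholeskyQR replaces column-by-column reflections with level-3 BLAS operations, at the cost of forming a Gram matrix that squares the condition number:

```python
import numpy as np

def cholesky_qr(A):
    """CholeskyQR: factor A = Q R via the Gram matrix, R^T R = A^T A.

    All work is in matrix-matrix products and one small Cholesky factorization
    (cache-friendly, level-3 BLAS), but forming A^T A squares cond(A), so this
    is less stable than Householder QR for ill-conditioned A.
    """
    R = np.linalg.cholesky(A.T @ A).T        # upper-triangular factor
    Q = np.linalg.solve(R.T, A.T).T          # Q = A R^{-1} via triangular solve
    return Q, R

rng = np.random.default_rng(4)
A = rng.standard_normal((30, 5))             # well-conditioned tall matrix
Q, R = cholesky_qr(A)
```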
Citations: 0