Generic Eigenstructures of Hermitian Pencils
Fernando De Terán, Andrii Dmytryshyn, Froilán M. Dopico
SIAM Journal on Matrix Analysis and Applications, Volume 45, Issue 1, Pages 260–283, March 2024. DOI: 10.1137/22m1523297
Abstract. We obtain the generic complete eigenstructures of complex Hermitian [math] matrix pencils with rank at most [math] (with [math]). To do this, we prove that the set of such pencils is the union of a finite number of bundle closures, where each bundle is the set of complex Hermitian [math] pencils with the same complete eigenstructure (up to the specific values of the distinct finite eigenvalues). We also obtain the explicit number of such bundles and their codimension. The cases [math], corresponding to general Hermitian pencils, and [math] exhibit surprising differences: for [math] the generic complete eigenstructures can contain only real eigenvalues, while for [math] they can contain both real and nonreal eigenvalues. Moreover, the sign characteristic of the real eigenvalues plays a relevant role in determining the generic eigenstructures.
The Joint Bidiagonalization of a Matrix Pair with Inaccurate Inner Iterations
Haibo Li
SIAM Journal on Matrix Analysis and Applications, Volume 45, Issue 1, Pages 232–259, March 2024. DOI: 10.1137/22m1541083
Abstract. The joint bidiagonalization (JBD) process iteratively reduces a matrix pair [math] to two bidiagonal forms simultaneously, which can be used for computing a partial generalized singular value decomposition (GSVD) of [math]. The process has a nested inner-outer iteration structure, where the inner iteration usually cannot be computed exactly. In this paper, we study the inaccurately computed inner iterations of JBD by first investigating the influence of the computational error of the inner iteration on the outer iteration, and then proposing a reorthogonalized JBD (rJBD) process that maintains the orthogonality of part of the Lanczos vectors. An error analysis of the rJBD is carried out to build connections with Lanczos bidiagonalizations. The results are then used to investigate the convergence and accuracy of the rJBD-based GSVD computation. It is shown that the accuracy of the computed GSVD components depends on the computing accuracy of the inner iterations and the condition number of [math], while the convergence rate is not much affected. For practical JBD-based GSVD computations, our results provide a guideline for choosing a proper computing accuracy of the inner iterations in order to obtain approximate GSVD components with a desired accuracy. Numerical experiments confirm our theoretical results.
Deflation for the Off-Diagonal Block in Symmetric Saddle Point Systems
Andrei Dumitrasc, Carola Kruse, Ulrich Rüde
SIAM Journal on Matrix Analysis and Applications, Volume 45, Issue 1, Pages 203–231, March 2024. DOI: 10.1137/22m1537266
Abstract. Deflation techniques are typically used to shift isolated clusters of small eigenvalues in order to obtain a tighter distribution and a smaller condition number. Such changes induce a positive effect in the convergence behavior of Krylov subspace methods, which are among the most popular iterative solvers for large sparse linear systems. We develop a deflation strategy for symmetric saddle point matrices by taking advantage of their underlying block structure. The vectors used for deflation come from an elliptic singular value decomposition relying on the generalized Golub–Kahan bidiagonalization process. The block targeted by deflation is the off-diagonal one since it features a problematic singular value distribution for certain applications. One example is the Stokes flow in elongated channels, where the off-diagonal block has several small, isolated singular values, depending on the length of the channel. Applying deflation to specific parts of the saddle point system is important when using solvers such as CRAIG, which operates on individual blocks rather than the whole system. The theory is developed by extending the existing framework for deflating square matrices before applying a Krylov subspace method such as MINRES. Numerical experiments confirm the merits of our strategy and lead to interesting questions about using approximate vectors for deflation.
Projectively and Weakly Simultaneously Diagonalizable Matrices and Their Applications
Wentao Ding, Jianze Li, Shuzhong Zhang
SIAM Journal on Matrix Analysis and Applications, Volume 45, Issue 1, Pages 167–202, March 2024. DOI: 10.1137/22m1507656
Abstract. Characterizing simultaneously diagonalizable (SD) matrices has received considerable attention in recent decades due to their wide applications and their role in matrix analysis. However, the notion of SD matrices is arguably still too restrictive for wider applications. In this paper, we consider two error measures related to the simultaneous diagonalization of matrices and propose several new variants of SD; in particular, TWSD, TWSD-B, [math]-SD (SDO), DWSD, and [math]-SD (SDO). These are all weaker forms of SD. We derive various sufficient and/or necessary conditions for them under different assumptions and show the relationships between these new notions. Finally, we discuss the applications of these new notions in, e.g., quadratically constrained quadratic programming and independent component analysis.
Communication Avoiding Block Low-Rank Parallel Multifrontal Triangular Solve with Many Right-Hand Sides
Patrick Amestoy, Olivier Boiteau, Alfredo Buttari, Matthieu Gerest, Fabienne Jézéquel, Jean-Yves L’Excellent, Theo Mary
SIAM Journal on Matrix Analysis and Applications, Volume 45, Issue 1, Pages 148–166, March 2024. DOI: 10.1137/23m1568600
Abstract. Block low-rank (BLR) compression can significantly reduce the memory and time costs of parallel sparse direct solvers. In this paper, we investigate the performance of the BLR triangular solve phase, which we observe to be underwhelming when dealing with many right-hand sides (RHS). We explain that this is because the bottleneck of the triangular solve is not in accessing the BLR LU factors, but rather in accessing the RHS, which are uncompressed. Motivated by this finding, we propose several new hybrid variants, which combine the right-looking and left-looking communication patterns to minimize the number of accesses to the RHS. We confirm via a theoretical analysis that these new variants can significantly reduce the total communication volume. We assess the impact of this reduction on the time performance on a range of real-life applications using the MUMPS solver, obtaining up to 20% time reduction.
Multiway Spectral Graph Partitioning: Cut Functions, Cheeger Inequalities, and a Simple Algorithm
Lars Eldén
SIAM Journal on Matrix Analysis and Applications, Volume 45, Issue 1, Pages 112–133, March 2024. DOI: 10.1137/23m1551936
Abstract. The problem of multiway partitioning of an undirected graph is considered. A spectral method is used, where the [math] largest eigenvalues of the normalized adjacency matrix (equivalently, the [math] smallest eigenvalues of the normalized graph Laplacian) are computed. It is shown that the information necessary for partitioning is contained in the subspace spanned by the [math] eigenvectors. The partitioning is encoded in a matrix [math] in indicator form, which is computed by approximating the eigenvector matrix by a product of [math] and an orthogonal matrix. A measure of the distance of a graph to being [math]-partitionable is defined, as well as two cut (cost) functions, for which Cheeger inequalities are proved; thus the relation between the eigenvalue and partitioning problems is established. Numerical examples are given that demonstrate that the partitioning algorithm is efficient and robust.
The Spectral Decomposition of the Continuous and Discrete Linear Elasticity Operators with Sliding Boundary Conditions
Jan Modersitzki
SIAM Journal on Matrix Analysis and Applications, Volume 45, Issue 1, Pages 134–147, March 2024. DOI: 10.1137/22m1541320
Abstract. The elastic potential is a valuable modeling tool for many applications, including medical imaging. One reason for this is that the energy and its Gâteaux derivative, the elastic operator, have strong coupling properties. Although these properties are desirable from a modeling perspective, they are not advantageous from a computational or operator decomposition perspective. In this paper, we show that the elastic operator can be spectrally decomposed despite its coupling property when equipped with sliding boundary conditions. Moreover, we present a discretization that is fully compatible with this spectral decomposition. In particular, for image registration problems, this decomposition opens new possibilities for multispectral solution techniques and fine-tuned operator-based regularization.
Variational Characterization of Monotone Nonlinear Eigenvector Problems and Geometry of Self-Consistent Field Iteration
Zhaojun Bai, Ding Lu
SIAM Journal on Matrix Analysis and Applications, Volume 45, Issue 1, Pages 84–111, March 2024. DOI: 10.1137/22m1525326
Abstract. This paper concerns a class of monotone eigenvalue problems with eigenvector nonlinearities (mNEPv). The mNEPv is encountered in applications such as the computation of the joint numerical radius of matrices, the best rank-one approximation of third-order partial-symmetric tensors, and the distance to singularity for dissipative Hamiltonian differential-algebraic equations. We first present a variational characterization of the mNEPv. Based on this characterization, we provide a geometric interpretation of the self-consistent field (SCF) iteration for solving the mNEPv, prove the global convergence of the SCF, and devise an accelerated SCF. Numerical examples demonstrate the theoretical properties and computational efficiency of the SCF and its acceleration.
Structure-Preserving Doubling Algorithms That Avoid Breakdowns for Algebraic Riccati-Type Matrix Equations
Tsung-Ming Huang, Yueh-Cheng Kuo, Wen-Wei Lin, Shih-Feng Shieh
SIAM Journal on Matrix Analysis and Applications, Volume 45, Issue 1, Pages 59–83, March 2024. DOI: 10.1137/23m1551791
Abstract. Structure-preserving doubling algorithms (SDAs) are efficient algorithms for solving Riccati-type matrix equations. However, breakdowns may occur in SDAs. To remedy this drawback, we first introduce [math]-symplectic forms ([math]-SFs), consisting of symplectic matrix pairs with a Hermitian parametric matrix [math]. Based on [math]-SFs, we develop modified SDAs (MSDAs) for solving the associated Riccati-type equations. MSDAs generate sequences of symplectic matrix pairs in [math]-SFs and prevent breakdowns by employing a suitably selected Hermitian matrix [math]. In practical implementations, we show that the Hermitian matrix [math] in MSDAs can be chosen as a real diagonal matrix, which reduces the computational complexity. Numerical results demonstrate a significant improvement in the accuracy of the solutions computed by MSDAs.
An Augmented Matrix-Based CJ-FEAST SVDsolver for Computing a Partial Singular Value Decomposition with the Singular Values in a Given Interval
Zhongxiao Jia, Kailiang Zhang
SIAM Journal on Matrix Analysis and Applications, Volume 45, Issue 1, Pages 24–58, March 2024. DOI: 10.1137/23m1547500
Abstract. The cross-product matrix-based CJ-FEAST SVDsolver previously proposed by the authors is shown to compute the left singular vectors possibly much less accurately than the right singular vectors and may be numerically backward unstable when a desired singular value is small. In this paper, an alternative augmented matrix-based CJ-FEAST SVDsolver is proposed to compute the singular triplets of a large matrix [math] with the singular values in an interval [math] contained in the singular spectrum. The new CJ-FEAST SVDsolver is a subspace iteration applied to an approximate spectral projector of the augmented matrix [math] associated with the eigenvalues in [math], and it constructs approximate left and right singular subspaces independently, onto which [math] is projected to obtain Ritz approximations to the desired singular triplets. Compact estimates are given for the accuracy of the approximate spectral projector constructed by the Chebyshev–Jackson series expansion in terms of the series degree, and a number of convergence results are established. The new solver is proved to be always numerically backward stable. A convergence comparison of the cross-product-based and augmented matrix-based CJ-FEAST SVDsolvers is made, and a general-purpose strategy for choosing between the two solvers is proposed for robustness and overall efficiency. Numerical experiments confirm all the results and demonstrate that the proposed solver is more robust and substantially more efficient than the corresponding contour-integral-based versions that exploit the trapezoidal rule and the Gauss–Legendre quadrature to construct an approximate spectral projector.