The present work is devoted to the construction of an asymptotic expansion for the eigenvalues of a Toeplitz matrix T_n(a) as n goes to infinity, with a continuous and real‐valued symbol a having a power singularity of degree γ with 1
{"title":"Asymptotics for the eigenvalues of Toeplitz matrices with a symbol having a power singularity","authors":"M. Bogoya, S. Grudsky","doi":"10.1002/nla.2496","DOIUrl":"https://doi.org/10.1002/nla.2496","url":null,"abstract":"The present work is devoted to the construction of an asymptotic expansion for the eigenvalues of a Toeplitz matrix Tn(a)$$ {T}_n(a) $$ as n$$ n $$ goes to infinity, with a continuous and real‐valued symbol a$$ a $$ having a power singularity of degree γ$$ gamma $$ with 1","PeriodicalId":49731,"journal":{"name":"Numerical Linear Algebra with Applications","volume":"30 1","pages":""},"PeriodicalIF":4.3,"publicationDate":"2023-02-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41510287","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Editorial: Tensor numerical methods and their application in scientific computing and data science","authors":"B. Khoromskij, V. Khoromskaia","doi":"10.1002/nla.2493","DOIUrl":"https://doi.org/10.1002/nla.2493","url":null,"abstract":",","PeriodicalId":49731,"journal":{"name":"Numerical Linear Algebra with Applications","volume":" ","pages":""},"PeriodicalIF":4.3,"publicationDate":"2023-01-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49364141","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Issue Information","authors":"","doi":"10.1002/nla.2450","DOIUrl":"https://doi.org/10.1002/nla.2450","url":null,"abstract":"","PeriodicalId":49731,"journal":{"name":"Numerical Linear Algebra with Applications","volume":" ","pages":""},"PeriodicalIF":4.3,"publicationDate":"2023-01-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48420131","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This article proposes a structure‐preserving quaternion full orthogonalization method (QFOM) for solving quaternion linear systems arising from color image restoration. The method is based on a quaternion Arnoldi procedure that preserves the quaternion Hessenberg form. Combining it with preconditioning techniques, we further derive a variant of QFOM for solving these linear systems, which can greatly improve the convergence rate of QFOM. Numerical experiments on randomly generated data and on color image restoration problems illustrate the effectiveness of the proposed algorithms in comparison with some existing methods.
{"title":"Structure preserving quaternion full orthogonalization method with applications","authors":"Tao Li, Qingwen Wang","doi":"10.1002/nla.2495","DOIUrl":"https://doi.org/10.1002/nla.2495","url":null,"abstract":"This article proposes a structure‐preserving quaternion full orthogonalization method (QFOM) for solving quaternion linear systems arising from color image restoration. The method is based on the quaternion Arnoldi procedure preserving the quaternion Hessenberg form. Combining with the preconditioning techniques, we further derive a variant of the QFOM for solving the linear systems, which can greatly improve the rate of convergence of QFOM. Numerical experiments on randomly generated data and color image restoration problems illustrate the effectiveness of the proposed algorithms in comparison with some existing methods.","PeriodicalId":49731,"journal":{"name":"Numerical Linear Algebra with Applications","volume":" ","pages":""},"PeriodicalIF":4.3,"publicationDate":"2023-01-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46859199","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We provide the bidiagonal factorization for some families of totally positive matrices defined through the Γ and β functions. Moreover, when these functions are evaluated at integers, we prove that the bidiagonal factorization can be computed with high relative accuracy, and hence the eigenvalues, singular values, inverses, and solutions of some associated linear systems can also be computed with high relative accuracy. We provide numerical examples illustrating this high relative accuracy.
{"title":"High relative accuracy with some special matrices related to Γ and β functions","authors":"J. Delgado, J. Peña","doi":"10.1002/nla.2494","DOIUrl":"https://doi.org/10.1002/nla.2494","url":null,"abstract":"For some families of totally positive matrices using Γ$$ Gamma $$ and β$$ beta $$ functions, we provide their bidiagonal factorization. Moreover, when these functions are defined over integers, we prove that the bidiagonal factorization can be computed with high relative accuracy and so we can compute with high relative accuracy their eigenvalues, singular values, inverses and the solutions of some associated linear systems. We provide numerical examples illustrating this high relative accuracy.","PeriodicalId":49731,"journal":{"name":"Numerical Linear Algebra with Applications","volume":" ","pages":""},"PeriodicalIF":4.3,"publicationDate":"2023-01-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48028882","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Nowadays, the singular value decomposition (SVD) is the standard method of choice for solving the nearest rotation matrix problem. Nevertheless, many other methods are available in the literature for the 3D case. This article reviews the most representative ones, proposes alternative ones, and presents a comparative analysis to elucidate their relative computational costs and error performances. This analysis leads to the conclusion that some algebraic closed‐form methods are as robust as the SVD, but significantly faster and more accurate.
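As a point of reference for the SVD baseline discussed above, here is a minimal sketch of the orthogonal-Procrustes projection of a 3×3 matrix onto the nearest rotation (the function name `nearest_rotation_svd` is illustrative, not from the article):

```python
import numpy as np

def nearest_rotation_svd(M):
    """Nearest rotation to a 3x3 matrix M in the Frobenius norm via the SVD.

    R = U * diag(1, 1, det(U V^T)) * V^T: the classical orthogonal
    Procrustes solution, with the sign correction restricting the
    result to proper rotations (det = +1).
    """
    U, _, Vt = np.linalg.svd(M)
    D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])
    return U @ D @ Vt

# Example: recover a rotation from a small perturbation of it.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
if np.linalg.det(Q) < 0:
    Q[:, 0] *= -1.0          # make Q a proper rotation
noisy = Q + 1e-3 * rng.standard_normal((3, 3))
R = nearest_rotation_svd(noisy)
```

The closed-form methods the survey compares avoid the SVD entirely, but the projection above is the standard against which they are measured.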
{"title":"Solution methods to the nearest rotation matrix problem in ℝ3 : A comparative survey","authors":"Soheil Sarabandi, Federico Thomas","doi":"10.1002/nla.2492","DOIUrl":"https://doi.org/10.1002/nla.2492","url":null,"abstract":"Nowadays, the singular value decomposition (SVD) is the standard method of choice for solving the nearest rotation matrix problem. Nevertheless, many other methods are available in the literature for the 3D case. This article reviews the most representative ones, proposes alternative ones, and presents a comparative analysis to elucidate their relative computational costs and error performances. This analysis leads to the conclusion that some algebraic closed‐form methods are as robust as the SVD, but significantly faster and more accurate.","PeriodicalId":49731,"journal":{"name":"Numerical Linear Algebra with Applications","volume":" ","pages":""},"PeriodicalIF":4.3,"publicationDate":"2023-01-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47907040","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We investigate a variant of the reorthogonalized block classical Gram–Schmidt method for computing the QR factorization of a full column rank matrix. Our aim is to bound the loss of orthogonality even when the first local QR algorithm is only conditionally stable. In particular, this allows the use of modified Gram–Schmidt instead of Householder transformations as the first local QR algorithm. Numerical experiments confirm the stable behavior of the new variant. We also examine the use of non‐QR local factorization and show by example that the resulting variants, although less stable, may also be applied to ill‐conditioned problems.
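A minimal sketch of the reorthogonalized block classical Gram–Schmidt idea ("orthogonalize twice") described above, assuming NumPy's Householder-based `qr` as the local QR; the article's point is precisely that a weaker local QR, such as modified Gram–Schmidt, can be substituted here:

```python
import numpy as np

def block_cgs2(A, block_size):
    """Reorthogonalized block classical Gram-Schmidt QR sketch.

    Each block of columns is orthogonalized against the previously
    computed Q twice (the second pass removes the orthogonality loss
    of the first), then factored with a local QR.
    """
    m, n = A.shape
    Q = np.zeros((m, 0))
    R = np.zeros((n, n))
    for k in range(0, n, block_size):
        cols = slice(k, min(k + block_size, n))
        W = A[:, cols].copy()
        S1 = Q.T @ W            # first projection onto span(Q)
        W -= Q @ S1
        S2 = Q.T @ W            # second pass: reorthogonalization
        W -= Q @ S2
        Qk, Rk = np.linalg.qr(W)  # local QR of the purged block
        R[:k, cols] = S1 + S2
        R[cols, cols] = Rk
        Q = np.hstack([Q, Qk])
    return Q, R
```

The sketch keeps the block structure and the two projection passes; stability safeguards (rank checks, adaptive re-runs) that a robust implementation would need are omitted.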
{"title":"A flexible block classical Gram–Schmidt skeleton with reorthogonalization","authors":"Qinmeng Zou","doi":"10.1002/nla.2491","DOIUrl":"https://doi.org/10.1002/nla.2491","url":null,"abstract":"We investigate a variant of the reorthogonalized block classical Gram–Schmidt method for computing the QR factorization of a full column rank matrix. Our aim is to bound the loss of orthogonality even when the first local QR algorithm is only conditionally stable. In particular, this allows the use of modified Gram–Schmidt instead of Householder transformations as the first local QR algorithm. Numerical experiments confirm the stable behavior of the new variant. We also examine the use of non‐QR local factorization and show by example that the resulting variants, although less stable, may also be applied to ill‐conditioned problems.","PeriodicalId":49731,"journal":{"name":"Numerical Linear Algebra with Applications","volume":" ","pages":""},"PeriodicalIF":4.3,"publicationDate":"2023-01-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43722134","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The main aim of this paper is to develop a nonconvex optimization model for third‐order tensor completion under a wavelet transform. On the one hand, through a wavelet transform of the frontal slices, we divide a large data tensor into a main‐part tensor and three detail‐part tensors, each containing about a quarter of the elements of the original. Solving these four small tensors not only improves computational efficiency but also better restores the original tensor data. On the other hand, a concave correction term lets us correct the low‐rank tubal nuclear norm (TNN) data‐fidelity term and the sparse ℓ1‐norm data‐fidelity term. We prove that the proposed algorithm converges to a critical point. Experimental results on image, magnetic resonance imaging, and video inpainting tasks clearly demonstrate the superior performance and efficiency of the developed method over state‐of‐the‐art approaches, including TNN and other methods.
{"title":"Nonconvex optimization for third‐order tensor completion under wavelet transform","authors":"Quan Yu, Minru Bai","doi":"10.1002/nla.2489","DOIUrl":"https://doi.org/10.1002/nla.2489","url":null,"abstract":"The main aim of this paper is to develop a nonconvex optimization model for third‐order tensor completion under wavelet transform. On the one hand, through wavelet transform of frontal slices, we divide a large tensor data into a main part tensor and three detail part tensors, and the elements of these four tensors are about a quarter of the original tensors. Solving these four small tensors can not only improve the operation efficiency, but also better restore the original tensor data. On the other hand, by using concave correction term, we are able to correct for low rank of tubal nuclear norm (TNN) data fidelity term and sparsity of l1$$ {l}_1 $$ ‐norm data fidelity term. We prove that the proposed algorithm can converge to some critical point. Experimental results on image, magnetic resonance imaging and video inpainting tasks clearly demonstrate the superior performance and efficiency of our developed method over state‐of‐the‐arts including the TNN and other methods.","PeriodicalId":49731,"journal":{"name":"Numerical Linear Algebra with Applications","volume":" ","pages":""},"PeriodicalIF":4.3,"publicationDate":"2023-01-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48662560","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We introduce the two‐sided Rayleigh quotient shift into the QR algorithm for non‐Hermitian matrices to achieve a cubic local convergence rate. For the singly shifted case, the two‐sided Rayleigh quotient iteration is incorporated into the QR iteration. A modified version of the method and its truncated version are developed to improve efficiency. Based on the observation that the Francis double‐shift QR iteration is related to a 2D Grassmann–Rayleigh quotient iteration, a doubly shifted QR algorithm with a two‐sided 2D Grassmann–Rayleigh quotient double‐shift is proposed, along with a modified version of that method and its truncated version. Numerical examples illustrate the convergence behavior of the proposed algorithms and show that the truncated versions of the modified methods outperform their counterparts, including the standard Rayleigh quotient single‐shift and the Francis double‐shift.
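For context, the classical single‐shift QR iteration with the one‐sided Rayleigh quotient shift H[n−1, n−1] (the scalar shift that the article generalizes to a two‐sided Rayleigh quotient) can be sketched as follows. This toy version assumes a real symmetric input, skips the Hessenberg reduction a production code would perform, and deflates one eigenvalue at a time:

```python
import numpy as np

def shifted_qr_eigvals(A, tol=1e-10, max_iter=500):
    """Eigenvalues of a real symmetric matrix by shifted QR iteration.

    At each step the shift is the Rayleigh quotient H[n-1, n-1];
    when the last subdiagonal entry is negligible, the trailing
    diagonal entry is accepted as an eigenvalue and deflated.
    """
    H = np.asarray(A, dtype=float).copy()
    n = H.shape[0]
    eigs = []
    while n > 1:
        for _ in range(max_iter):
            mu = H[n - 1, n - 1]                      # Rayleigh quotient shift
            Q, R = np.linalg.qr(H[:n, :n] - mu * np.eye(n))
            H[:n, :n] = R @ Q + mu * np.eye(n)        # similarity transform
            if abs(H[n - 1, n - 2]) < tol:            # converged: deflate
                break
        eigs.append(H[n - 1, n - 1])
        n -= 1
    eigs.append(H[0, 0])
    return np.sort(np.array(eigs))
```

Per iteration this shift converges cubically on symmetric matrices, which is the behavior the article's two‐sided shift is designed to recover in the non‐Hermitian case.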
{"title":"QR algorithm with two‐sided Rayleigh quotient shifts","authors":"X. Chen, Hongguo Xu","doi":"10.1002/nla.2487","DOIUrl":"https://doi.org/10.1002/nla.2487","url":null,"abstract":"We introduce the two‐sided Rayleigh quotient shift to the QR algorithm for non‐Hermitian matrices to achieve a cubic local convergence rate. For the singly shifted case, the two‐sided Rayleigh quotient iteration is incorporated into the QR iteration. A modified version of the method and its truncated version are developed to improve the efficiency. Based on the observation that the Francis double‐shift QR iteration is related to a 2D Grassmann–Rayleigh quotient iteration, A doubly shifted QR algorithm with the two‐sided 2D Grassmann–Rayleigh quotient double‐shift is proposed. A modified version of the method and its truncated version are also developed. Numerical examples are presented to show the convergence behavior of the proposed algorithms. Numerical examples also show that the truncated versions of the modified methods outperform their counterparts including the standard Rayleigh quotient single‐shift and the Francis double‐shift.","PeriodicalId":49731,"journal":{"name":"Numerical Linear Algebra with Applications","volume":" ","pages":""},"PeriodicalIF":4.3,"publicationDate":"2023-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41684014","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Tensor completion arises in numerous applications where the data are high dimensional and gathered from multiple sources or views. Existing methods merely incorporate structure information, ignoring the fact that ubiquitous side information may help estimate the missing entries of a partially observed tensor. Inspired by this, we formulate a sparse and low‐rank tensor completion model named SLRMV. The ℓ0‐norm, instead of its relaxation, is used in the objective function to constrain the sparseness of the noise. The CP decomposition is used to decompose the high‐quality tensor, based on which a combination of Schatten p‐norms on the latent factor matrices characterizes the low‐rank tensor structure with high computational efficiency. Diverse similarity matrices for the same factor matrix are regarded as multi‐view side information guiding the tensor completion task. Although SLRMV is a nonconvex and discontinuous problem, an optimality analysis in terms of the Karush–Kuhn–Tucker (KKT) conditions is proposed, based on which a hard‐thresholding‐based alternating direction method of multipliers (HT‐ADMM) is designed. Extensive experiments demonstrate the efficiency of SLRMV in tensor completion.
{"title":"Multi‐view side information‐incorporated tensor completion","authors":"Yingjie Tian, Xiaotong Yu, Saiji Fu","doi":"10.1002/nla.2485","DOIUrl":"https://doi.org/10.1002/nla.2485","url":null,"abstract":"Tensor completion originates in numerous applications where data utilized are of high dimensions and gathered from multiple sources or views. Existing methods merely incorporate the structure information, ignoring the fact that ubiquitous side information may be beneficial to estimate the missing entries from a partially observed tensor. Inspired by this, we formulate a sparse and low‐rank tensor completion model named SLRMV. The ℓ0‐norm instead of its relaxation is used in the objective function to constrain the sparseness of noise. The CP decomposition is used to decompose the high‐quality tensor, based on which the combination of Schatten p‐norm on each latent factor matrix is employed to characterize the low‐rank tensor structure with high computation efficiency. Diverse similarity matrices for the same factor matrix are regarded as multi‐view side information for guiding the tensor completion task. Although SLRMV is a nonconvex and discontinuous problem, the optimality analysis in terms of Karush‐Kuhn‐Tucker (KKT) conditions is accordingly proposed, based on which a hard‐thresholding based alternating direction method of multipliers (HT‐ADMM) is designed. Extensive experiments remarkably demonstrate the efficiency of SLRMV in tensor completion.","PeriodicalId":49731,"journal":{"name":"Numerical Linear Algebra with Applications","volume":" ","pages":""},"PeriodicalIF":4.3,"publicationDate":"2022-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43167030","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}