We present RCO (regularized Cholesky optimization): a numerical algorithm for finding a symmetric positive definite (PD) matrix with a bounded condition number that minimizes an objective function. This task arises when estimating a covariance matrix from noisy data or under model constraints, which can cause spurious small negative eigenvalues. A special case is the problem of finding the nearest well-conditioned PD matrix to a given matrix. RCO explicitly optimizes the entries of the Cholesky factor. This requires solving a regularized non-linear, non-convex optimization problem, for which we apply Newton-CG and exploit the Hessian's sparsity. The regularization parameter is determined via numerical continuation with an accuracy-conditioning trade-off criterion. We apply RCO to our motivating educational measurement application of estimating the covariance matrix of an empirical best linear prediction (EBLP) of school growth scores. We present numerical results for two empirical datasets, state and urban. RCO outperforms general-purpose near-PD algorithms, obtaining smaller matrix reconstruction bias and smaller EBLP estimator mean-squared error. It is in fact the only algorithm that solves the right minimization problem, striking a balance between the objective function and the condition number. RCO can be similarly applied to the stable estimation of other posterior means. For the task of finding the nearest PD matrix, RCO yields condition numbers similar to those of near-PD methods, but provides a better overall near-null space.
{"title":"Numerical algorithm for estimating a conditioned symmetric positive definite matrix under constraints","authors":"Oren E. Livne, Katherine E. Castellano, Dan F. McCaffrey","doi":"10.1002/nla.2559","DOIUrl":"https://doi.org/10.1002/nla.2559","url":null,"abstract":"SummaryWe present RCO (regularized Cholesky optimization): a numerical algorithm for finding a symmetric positive definite (PD) matrix with a bounded condition number that minimizes an objective function. This task arises when estimating a covariance matrix from noisy data or due to model constraints, which can cause spurious small negative eigenvalues. A special case is the problem of finding the nearest well‐conditioned PD matrix to a given matrix. RCO explicitly optimizes the entries of the Cholesky factor. This requires solving a regularized non‐linear, non‐convex optimization problem, for which we apply Newton‐CG and exploit the Hessian's sparsity. The regularization parameter is determined via numerical continuation with an accuracy‐conditioning trade‐off criterion. We apply RCO to our motivating educational measurement application of estimating the covariance matrix of an empirical best linear prediction (EBLP) of school growth scores. We present numerical results for two empirical datasets, state and urban. RCO outperforms general‐purpose near‐PD algorithms, obtaining ‐smaller matrix reconstruction bias and smaller EBLP estimator mean‐squared error. It is in fact the only algorithm that solves the right minimization problem, which strikes a balance between the objective function and the condition number. RCO can be similarly applied to the stable estimation of other posterior means. 
For the task of finding the nearest PD matrix, RCO yields similar condition numbers to near‐PD methods, but provides a better overall near‐null space.","PeriodicalId":49731,"journal":{"name":"Numerical Linear Algebra with Applications","volume":"81 1","pages":""},"PeriodicalIF":4.3,"publicationDate":"2024-05-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140882710","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
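The Cholesky-parameterization idea above can be illustrated in a few lines. The sketch below is not the authors' RCO algorithm (which uses Newton-CG, Hessian sparsity, and continuation in the regularization parameter); it is a toy stand-in using a generic BFGS optimizer and an ad hoc conditioning penalty, assuming only NumPy and SciPy:

```python
import numpy as np
from scipy.optimize import minimize

def near_pd_cholesky(B, reg=1e-3):
    """Find an SPD matrix A = L @ L.T close to a symmetric B by optimizing
    the entries of the lower-triangular Cholesky factor L directly.
    `reg` penalizes tiny diagonal entries of L (a proxy for ill-conditioning)."""
    n = B.shape[0]
    tril = np.tril_indices(n)

    def unpack(x):
        L = np.zeros((n, n))
        L[tril] = x
        return L

    def objective(x):
        L = unpack(x)
        A = L @ L.T
        fit = np.sum((A - B) ** 2)            # Frobenius-distance term
        cond_pen = np.sum(np.diag(L) ** -2.0) # blows up as L approaches singularity
        return fit + reg * cond_pen

    # Start from the Cholesky factor of a shifted (hence PD) version of B.
    shift = max(0.0, -np.linalg.eigvalsh(B).min()) + 1.0
    x0 = np.linalg.cholesky(B + shift * np.eye(n))[tril]
    res = minimize(objective, x0, method="BFGS")
    L = unpack(res.x)
    return L @ L.T

B = np.array([[2.0, 0.9],
              [0.9, -0.1]])          # symmetric but indefinite input
A = near_pd_cholesky(B)
print(np.linalg.eigvalsh(A))         # eigenvalues are strictly positive
```

Because A is built as L @ L.T, positive semidefiniteness holds by construction, and the penalty keeps the factor away from singularity.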
The convergence of preconditioned gradient methods for nonlinear underdetermined least squares problems arising in, for example, supervised learning of overparameterized neural networks is investigated. In this general setting, conditions are given that guarantee the existence of global minimizers that correspond to zero residuals and a proof of the convergence of a gradient method to these global minima is presented. In order to accelerate convergence of the gradient method, different preconditioning strategies are developed and analyzed. In particular, a left randomized preconditioner and a right coarse‐level correction preconditioner are combined and investigated. It is demonstrated that the resulting split preconditioned two‐level gradient method incorporates the advantages of both approaches and performs very efficiently.
{"title":"A split preconditioning scheme for nonlinear underdetermined least squares problems","authors":"Nadja Vater, Alfio Borzì","doi":"10.1002/nla.2558","DOIUrl":"https://doi.org/10.1002/nla.2558","url":null,"abstract":"The convergence of preconditioned gradient methods for nonlinear underdetermined least squares problems arising in, for example, supervised learning of overparameterized neural networks is investigated. In this general setting, conditions are given that guarantee the existence of global minimizers that correspond to zero residuals and a proof of the convergence of a gradient method to these global minima is presented. In order to accelerate convergence of the gradient method, different preconditioning strategies are developed and analyzed. In particular, a left randomized preconditioner and a right coarse‐level correction preconditioner are combined and investigated. It is demonstrated that the resulting split preconditioned two‐level gradient method incorporates the advantages of both approaches and performs very efficiently.","PeriodicalId":49731,"journal":{"name":"Numerical Linear Algebra with Applications","volume":"18 1","pages":""},"PeriodicalIF":4.3,"publicationDate":"2024-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140624698","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, for quickly solving one- and two-dimensional space-fractional sine-Gordon equations with distributed delay, we suggest several accelerated schemes of direct compact difference (DCD) methods. For one-dimensional (1D) problems, using a function transformation, we construct an indirect compact difference (ICD) method, which incurs a lower computational cost than the corresponding DCD method, and prove under appropriate conditions that the ICD method has second-order (resp. fourth-order) accuracy in time (resp. space). By extending the argument for the 1D case, we further obtain an ICD method for solving two-dimensional (2D) problems and derive a similar convergence result. For the ICD and DCD methods for 2D problems, we also give their alternating direction implicit (ADI) schemes. Moreover, for the fast implementation of the ICD method for 1D problems and the indirect ADI method for 2D problems, we further present acceleration strategies. Finally, a series of numerical experiments confirms the findings of this paper.
{"title":"Accelerated schemes of compact difference methods for space-fractional sine-Gordon equations with distributed delay","authors":"Tao Sun, Chengjian Zhang, Changyang Tang","doi":"10.1002/nla.2556","DOIUrl":"https://doi.org/10.1002/nla.2556","url":null,"abstract":"In this paper, for quickly solving one- and two-dimensional space-fractional sine-Gordon equations with distributed delay, we suggest several accelerated schemes of direct compact difference (DCD) methods. For one-dimensional (1D) problems, with a function transformation, we construct an indirect compact difference (ICD) method, which requires less calculation cost than the corresponding DCD method, and prove under the appropriate conditions that ICD method has second-order (resp. forth-order) calculation accuracy in time (resp. space). By extending the argument for 1D case, we further obtain an ICD method for solving two-dimensional (2D) problems and derive the similar convergence result. For ICD and DCD methods of 2D problems, we also give their alternative direction implicit (ADI) schemes. Moreover, for the fast implementations of ICD method of 1D problems and indirect ADI method of 2D problems, we further present their acceleration strategies. Finally, with a series of numerical experiments, the findings in this paper are further confirmed.","PeriodicalId":49731,"journal":{"name":"Numerical Linear Algebra with Applications","volume":"15 1","pages":""},"PeriodicalIF":4.3,"publicationDate":"2024-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140565422","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We consider the problem of large-scale finite-sum minimization on a Riemannian manifold. We develop a sub-sampled adaptive trust region method on Riemannian manifolds. Based on inexact information, we adopt adaptive techniques to flexibly adjust the trust region radius in our method. We establish the iteration complexity required for the algorithm to attain an approximate second-order stationary point, which matches the corresponding result for the classical trust region method. Numerical results for PCA on the Grassmann manifold and low-rank matrix completion are reported to demonstrate the effectiveness of the proposed Riemannian method.
{"title":"Sub‐sampled adaptive trust region method on Riemannian manifolds","authors":"Shimin Zhao, Tao Yan, Yuanguo Zhu","doi":"10.1002/nla.2557","DOIUrl":"https://doi.org/10.1002/nla.2557","url":null,"abstract":"We consider the problem of large‐scale finite‐sum minimization on Riemannian manifold. We develop a sub‐sampled adaptive trust region method on Riemannian manifolds. Based on inexact information, we adopt adaptive techniques to flexibly adjust the trust region radius in our method. We present the iteration complexity is when the algorithm attains an ‐second‐order stationary point, which matches the result on trust region method. Numerical results for PCA on Grassmann manifold and low‐rank matrix completion are reported to demonstrate the effectiveness of the proposed Riemannian method.","PeriodicalId":49731,"journal":{"name":"Numerical Linear Algebra with Applications","volume":"5 1","pages":""},"PeriodicalIF":4.3,"publicationDate":"2024-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140565420","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We derive an extension of the sequential homotopy method that allows for the application of inexact solvers for the linear (double) saddle-point systems arising in the local semismooth Newton method for the homotopy subproblems. For the class of problems that exhibit (after suitable partitioning of the variables) a zero in the off-diagonal blocks of the Hessian of the Lagrangian, we propose and analyze an efficient, parallelizable, symmetric positive definite preconditioner based on a double Schur complement approach. For discretized optimal control problems with PDE constraints, this structure is often present with the canonical partitioning of the variables in states and controls. We conclude with numerical results for a badly conditioned and highly nonlinear benchmark optimization problem with elliptic partial differential equations and control bounds. The resulting method allows for the parallel solution of large 3D problems.
{"title":"Double saddle-point preconditioning for Krylov methods in the inexact sequential homotopy method","authors":"John W. Pearson, Andreas Potschka","doi":"10.1002/nla.2553","DOIUrl":"https://doi.org/10.1002/nla.2553","url":null,"abstract":"We derive an extension of the sequential homotopy method that allows for the application of inexact solvers for the linear (double) saddle-point systems arising in the local semismooth Newton method for the homotopy subproblems. For the class of problems that exhibit (after suitable partitioning of the variables) a zero in the off-diagonal blocks of the Hessian of the Lagrangian, we propose and analyze an efficient, parallelizable, symmetric positive definite preconditioner based on a double Schur complement approach. For discretized optimal control problems with PDE constraints, this structure is often present with the canonical partitioning of the variables in states and controls. We conclude with numerical results for a badly conditioned and highly nonlinear benchmark optimization problem with elliptic partial differential equations and control bounds. The resulting method allows for the parallel solution of large 3D problems.","PeriodicalId":49731,"journal":{"name":"Numerical Linear Algebra with Applications","volume":"15 1","pages":""},"PeriodicalIF":4.3,"publicationDate":"2024-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140565416","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Large-scale linear time-invariant (LTI) dynamical systems are widely used to characterize complicated physical phenomena. Glover developed the Hankel norm approximation (HNA) algorithm for optimally reducing the system in the Hankel norm, and we study its numerical issues. We provide a remedy for the numerical instabilities of Glover's HNA algorithm caused by clustered singular values. We analyze the effect of our modification on the degree and the Hankel error of the reduced system. Moreover, we propose a two-stage framework to reduce the order of a large-scale LTI system given samples of its transfer function for a target degree of the reduced system. It combines the adaptive Antoulas–Anderson (AAA) algorithm, modified to produce an intermediate LTI system in a numerically stable way, and the modified HNA algorithm. A carefully computed rational approximation of an adaptively chosen degree gives us an algorithm for reducing an LTI system, which achieves a balance between speed and accuracy.
{"title":"Leveraging the Hankel norm approximation and data‐driven algorithms in reduced order modeling","authors":"Annan Yu, Alex Townsend","doi":"10.1002/nla.2555","DOIUrl":"https://doi.org/10.1002/nla.2555","url":null,"abstract":"SummaryLarge‐scale linear time‐invariant (LTI) dynamical systems are widely used to characterize complicated physical phenomena. Glover developed the Hankel norm approximation (HNA) algorithm for optimally reducing the system in the Hankel norm, and we study its numerical issues. We provide a remedy for the numerical instabilities of Glover's HNA algorithm caused by clustered singular values. We analyze the effect of our modification on the degree and the Hankel error of the reduced system. Moreover, we propose a two‐stage framework to reduce the order of a large‐scale LTI system given samples of its transfer function for a target degree of the reduced system. It combines the adaptive Antoulas–Anderson (AAA) algorithm, modified to produce an intermediate LTI system in a numerically stable way, and the modified HNA algorithm. A carefully computed rational approximation of an adaptively chosen degree gives us an algorithm for reducing an LTI system, which achieves a balance between speed and accuracy.","PeriodicalId":49731,"journal":{"name":"Numerical Linear Algebra with Applications","volume":"2 1","pages":""},"PeriodicalIF":4.3,"publicationDate":"2024-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140205342","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
For solving a consistent system of linear equations, a classical row-action method such as the Kaczmarz method is a simple yet effective iterative solver. Based on a greedy index selection strategy and Polyak's heavy-ball momentum acceleration technique, we propose two deterministic row-action methods and establish the corresponding convergence theory. We show that our algorithms converge linearly to the least-squares solution with minimum Euclidean norm. Several numerical studies are presented to corroborate our theoretical findings. Real-world applications, such as data fitting in computer-aided geometric design, are also presented for illustrative purposes.
{"title":"On the Polyak momentum variants of the greedy deterministic single and multiple row‐action methods","authors":"Nian‐Ci Wu, Qian Zuo, Yatian Wang","doi":"10.1002/nla.2552","DOIUrl":"https://doi.org/10.1002/nla.2552","url":null,"abstract":"For solving a consistent system of linear equations, the classical row‐action method, such as Kaczmarz method, is a simple while really effective iteration solver. Based on the greedy index selection strategy and Polyak's heavy‐ball momentum acceleration technique, we propose two deterministic row‐action methods and establish the corresponding convergence theory. We show that our algorithm can linearly converge to a least‐squares solution with minimum Euclidean norm. Several numerical studies have been presented to corroborate our theoretical findings. Real‐world applications, such as data fitting in computer‐aided geometry design, are also presented for illustrative purposes.","PeriodicalId":49731,"journal":{"name":"Numerical Linear Algebra with Applications","volume":"47 1","pages":""},"PeriodicalIF":4.3,"publicationDate":"2024-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140149368","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The problem of polynomial least squares fitting in the standard Lagrange basis is addressed in this work. Although the matrices involved in the corresponding overdetermined linear systems are not totally positive, rectangular totally positive Lagrange-Vandermonde matrices are used to take advantage of total positivity in the construction of accurate algorithms to solve the considered problem. In particular, a fast and accurate algorithm to compute the bidiagonal decomposition of such rectangular totally positive matrices is crucial to solve the problem. This algorithm also allows the accurate computation of the Moore-Penrose inverse and the projection matrix of the collocation matrices involved in these problems. Numerical experiments showing the good behaviour of the proposed algorithms are included.
{"title":"Total positivity and least squares problems in the Lagrange basis","authors":"Ana Marco, José-Javier Martínez, Raquel Viaña","doi":"10.1002/nla.2554","DOIUrl":"https://doi.org/10.1002/nla.2554","url":null,"abstract":"The problem of polynomial least squares fitting in the standard Lagrange basis is addressed in this work. Although the matrices involved in the corresponding overdetermined linear systems are not totally positive, rectangular totally positive Lagrange-Vandermonde matrices are used to take advantage of total positivity in the construction of accurate algorithms to solve the considered problem. In particular, a fast and accurate algorithm to compute the bidiagonal decomposition of such rectangular totally positive matrices is crucial to solve the problem. This algorithm also allows the accurate computation of the Moore-Penrose inverse and the projection matrix of the collocation matrices involved in these problems. Numerical experiments showing the good behaviour of the proposed algorithms are included.","PeriodicalId":49731,"journal":{"name":"Numerical Linear Algebra with Applications","volume":"21 1","pages":""},"PeriodicalIF":4.3,"publicationDate":"2024-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140074420","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, we describe and analyze the spectral properties of several exact block preconditioners for a class of double saddle point problems. Among all these, we consider an inexact version of a block triangular preconditioner providing extremely fast convergence of the (F)GMRES method. We develop a spectral analysis of the preconditioned matrix showing that the complex eigenvalues lie in a circle of center (1, 0).
{"title":"Some preconditioning techniques for a class of double saddle point problems","authors":"Fariba Balani Bakrani, Luca Bergamaschi, Ángeles Martínez, Masoud Hajarian","doi":"10.1002/nla.2551","DOIUrl":"https://doi.org/10.1002/nla.2551","url":null,"abstract":"In this paper, we describe and analyze the spectral properties of several exact block preconditioners for a class of double saddle point problems. Among all these, we consider an inexact version of a block triangular preconditioner providing extremely fast convergence of the (F)GMRES method. We develop a spectral analysis of the preconditioned matrix showing that the complex eigenvalues lie in a circle of center <mjx-container aria-label=\"left parenthesis 1 comma 0 right parenthesis\" ctxtmenu_counter=\"0\" ctxtmenu_oldtabindex=\"1\" jax=\"CHTML\" role=\"application\" sre-explorer- style=\"font-size: 103%; position: relative;\" tabindex=\"0\"><mjx-math aria-hidden=\"true\"><mjx-semantics><mjx-mrow data-semantic-children=\"5\" data-semantic-content=\"0,4\" data-semantic- data-semantic-role=\"leftright\" data-semantic-speech=\"left parenthesis 1 comma 0 right parenthesis\" data-semantic-type=\"fenced\"><mjx-mo data-semantic- data-semantic-operator=\"fenced\" data-semantic-parent=\"6\" data-semantic-role=\"open\" data-semantic-type=\"fence\" style=\"margin-left: 0.056em; margin-right: 0.056em;\"><mjx-c></mjx-c></mjx-mo><mjx-mrow data-semantic-children=\"1,2,3\" data-semantic-content=\"2\" data-semantic- data-semantic-parent=\"6\" data-semantic-role=\"sequence\" data-semantic-type=\"punctuated\"><mjx-mn data-semantic-annotation=\"clearspeak:simple\" data-semantic-font=\"normal\" data-semantic- data-semantic-parent=\"5\" data-semantic-role=\"integer\" data-semantic-type=\"number\"><mjx-c></mjx-c></mjx-mn><mjx-mo data-semantic- data-semantic-operator=\"punctuated\" data-semantic-parent=\"5\" data-semantic-role=\"comma\" data-semantic-type=\"punctuation\" rspace=\"3\" style=\"margin-left: 
0.056em;\"><mjx-c></mjx-c></mjx-mo><mjx-mn data-semantic-annotation=\"clearspeak:simple\" data-semantic-font=\"normal\" data-semantic- data-semantic-parent=\"5\" data-semantic-role=\"integer\" data-semantic-type=\"number\"><mjx-c></mjx-c></mjx-mn></mjx-mrow><mjx-mo data-semantic- data-semantic-operator=\"fenced\" data-semantic-parent=\"6\" data-semantic-role=\"close\" data-semantic-type=\"fence\" style=\"margin-left: 0.056em; margin-right: 0.056em;\"><mjx-c></mjx-c></mjx-mo></mjx-mrow></mjx-semantics></mjx-math><mjx-assistive-mml aria-hidden=\"true\" display=\"inline\" unselectable=\"on\"><math altimg=\"/cms/asset/3929c6dd-d320-4d1e-8a5e-27ca95fe5f88/nla2551-math-0001.png\" xmlns=\"http://www.w3.org/1998/Math/MathML\"><semantics><mrow data-semantic-=\"\" data-semantic-children=\"5\" data-semantic-content=\"0,4\" data-semantic-role=\"leftright\" data-semantic-speech=\"left parenthesis 1 comma 0 right parenthesis\" data-semantic-type=\"fenced\"><mo data-semantic-=\"\" data-semantic-operator=\"fenced\" data-semantic-parent=\"6\" data-semantic-role=\"open\" data-semantic-type=\"fence\" stretchy=\"false\">(</mo><mrow data-semantic-=\"\" data-semantic-children=\"1,2,3\" data-semantic-content=\"2\" data-semantic-parent=\"6\" data-semantic-role=\"sequence\" data-semantic-type=\"punctuated\"><mn data-semantic-=\"\" data-semantic-annotation=\"clearspeak:simple\" data-semantic-font=\"normal\" data-semantic-parent=\"5","PeriodicalId":49731,"journal":{"name":"Numerical Linear Algebra with Applications","volume":"127 1","pages":""},"PeriodicalIF":4.3,"publicationDate":"2024-02-22","publicationTypes":"Journal 
Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139969112","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Gramian matrices with respect to inner products defined for Hilbert spaces supported on bounded and unbounded intervals are represented through a bidiagonal factorization. It is proved that the considered matrices are strictly totally positive Hankel matrices and their catalecticant determinants are also calculated. Using the proposed representation, the numerical resolution of linear algebra problems with these matrices can be achieved to high relative accuracy. Numerical experiments are provided, and they illustrate the excellent results obtained when applying the theoretical results.
{"title":"Total positivity and high relative accuracy for several classes of Hankel matrices","authors":"E. Mainar, J.M. Peña, B. Rubio","doi":"10.1002/nla.2550","DOIUrl":"https://doi.org/10.1002/nla.2550","url":null,"abstract":"SummaryGramian matrices with respect to inner products defined for Hilbert spaces supported on bounded and unbounded intervals are represented through a bidiagonal factorization. It is proved that the considered matrices are strictly totally positive Hankel matrices and their catalecticant determinants are also calculated. Using the proposed representation, the numerical resolution of linear algebra problems with these matrices can be achieved to high relative accuracy. Numerical experiments are provided, and they illustrate the excellent results obtained when applying the theoretical results.","PeriodicalId":49731,"journal":{"name":"Numerical Linear Algebra with Applications","volume":"52 1","pages":""},"PeriodicalIF":4.3,"publicationDate":"2024-02-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139956003","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}