Numerical algorithm for estimating a conditioned symmetric positive definite matrix under constraints
Oren E. Livne, Katherine E. Castellano, Dan F. McCaffrey
Numerical Linear Algebra with Applications, 6 May 2024. DOI: 10.1002/nla.2559
Citations: 0
Abstract
We present RCO (regularized Cholesky optimization): a numerical algorithm for finding a symmetric positive definite (PD) matrix with a bounded condition number that minimizes an objective function. This task arises when estimating a covariance matrix from noisy data or under model constraints, which can cause spurious small negative eigenvalues. A special case is the problem of finding the nearest well-conditioned PD matrix to a given matrix. RCO explicitly optimizes the entries of the Cholesky factor. This requires solving a regularized non-linear, non-convex optimization problem, for which we apply Newton-CG and exploit the Hessian's sparsity. The regularization parameter is determined via numerical continuation with an accuracy-conditioning trade-off criterion. We apply RCO to our motivating educational measurement application: estimating the covariance matrix of an empirical best linear prediction (EBLP) of school growth scores. We present numerical results for two empirical datasets, state and urban. RCO outperforms general-purpose near-PD algorithms, obtaining smaller matrix reconstruction bias and smaller EBLP estimator mean-squared error. It is in fact the only algorithm that solves the right minimization problem, which strikes a balance between the objective function and the condition number. RCO can be similarly applied to the stable estimation of other posterior means. For the task of finding the nearest PD matrix, RCO yields condition numbers similar to those of near-PD methods, but provides a better overall near-null space.
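To make the Cholesky-parametrized formulation concrete, here is a minimal Python sketch of the nearest well-conditioned PD special case the abstract mentions. It follows the abstract in optimizing the entries of the Cholesky factor with Newton-CG; everything else is an assumption for illustration: the function name rco_nearest_pd, the barrier-style regularizer on the Cholesky diagonal, and the fixed weight mu (the paper instead selects the regularization parameter by numerical continuation with an accuracy-conditioning trade-off criterion, which is not reproduced here).

```python
import numpy as np
from scipy.optimize import minimize

def rco_nearest_pd(A, mu=1e-2):
    """Illustrative sketch: nearest well-conditioned PD matrix X = L L^T to A,
    found by optimizing the lower-triangular entries of L directly."""
    A = (A + A.T) / 2.0            # symmetrize defensively
    n = A.shape[0]
    tril = np.tril_indices(n)
    diag = np.arange(n)

    def unpack(theta):
        L = np.zeros((n, n))
        L[tril] = theta
        return L

    def objective(theta):
        L = unpack(theta)
        d = np.diag(L)
        fit = np.sum((L @ L.T - A) ** 2)   # squared Frobenius distance to A
        # Barrier-style regularizer on the Cholesky diagonal: keeps the factor
        # away from singularity and from blow-up. This is a stand-in for the
        # paper's conditioning control, not its exact criterion.
        reg = np.sum(d ** 2 - 2.0 * np.log(np.abs(d)))
        return fit + mu * reg

    def grad(theta):
        L = unpack(theta)
        G = 4.0 * ((L @ L.T - A) @ L)       # gradient of the fit term w.r.t. L
        d = np.diag(L).copy()
        G[diag, diag] += mu * (2.0 * d - 2.0 / d)  # regularizer gradient
        return G[tril]                       # restrict to the free entries

    # Warm start: Cholesky factor of A with eigenvalues clipped to be positive.
    w, V = np.linalg.eigh(A)
    L0 = np.linalg.cholesky(V @ np.diag(np.maximum(w, 1e-3)) @ V.T)

    res = minimize(objective, L0[tril], jac=grad, method="Newton-CG")
    L = unpack(res.x)
    return L @ L.T

# Usage on a random symmetric, likely indefinite matrix.
rng = np.random.default_rng(0)
B = rng.standard_normal((6, 6))
A = (B + B.T) / 2 - 0.5 * np.eye(6)
X = rco_nearest_pd(A)
print(np.linalg.eigvalsh(X).min(), np.linalg.cond(X), np.linalg.norm(X - A))
```

Unlike the paper, which exploits the sparsity of the Hessian inside Newton-CG, this sketch omits an explicit Hessian and lets SciPy approximate Hessian-vector products by finite differences on the gradient.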
Journal overview:
Manuscripts submitted to Numerical Linear Algebra with Applications should include large-scale broad-interest applications in which challenging computational results are integral to the approach investigated and analysed. Manuscripts that, in the Editor’s view, do not satisfy these conditions will not be accepted for review.
Numerical Linear Algebra with Applications receives submissions in areas that address developing, analysing, and applying linear algebra algorithms for solving problems arising in multilinear (tensor) algebra and in statistics (such as Markov chains), as well as in deterministic and stochastic modelling of large-scale networks, algorithm development, performance analysis, or related computational aspects.
Topics covered include: Standard and Generalized Conjugate Gradients, Multigrid and Other Iterative Methods; Preconditioning Methods; Direct Solution Methods; Numerical Methods for Eigenproblems; Newton-like Methods for Nonlinear Equations; Parallel and Vectorizable Algorithms in Numerical Linear Algebra; Application of Methods of Numerical Linear Algebra in Science, Engineering and Economics.