The Conjugate Gradients Method Applied to Problems in Linear Algebra

J. Nash
{"title":"The Conjugate Gradients Method Applied to Problems in Linear Algebra","authors":"J. Nash","doi":"10.1201/9781315139784-19","DOIUrl":null,"url":null,"abstract":"This monograph concludes by applying the conjugate gradients method, developed in chapter 16 for the minimisation of nonlinear functions, to linear equations, linear least-squares and algebraic eigenvalue problems. The methods suggested may not be the most efficient or effective of their type, since this subject area has not attracted a great deal of careful research. In fact much of the work which has been performed on the sparse algebraic eigenvalue problem has been carried out by those scientists and engineers in search of solutions. Stewart (1976) has prepared an extensive bibliography on the large, sparse, generalised symmetric matrix eigenvalue problem in which it is unfortunately difficult to find many reports that do more than describe a method. Thorough or even perfunctory testing is often omitted and convergence is rarely demonstrated, let alone proved. The work of Professor Axe1 Ruhe and his co-workers at Umea is a notable exception to this generality. Partly, the lack of testing is due to the sheer size of the matrices that may be involved in real problems and the cost of finding eigensolutions. The linear equations and least-squares problems have enjoyed a more diligent study. A number of studies have been made of the conjugate gradients method for linear-equation systems with positive definite coefficient matrices, of which one is that of Reid (1971). Related methods have been developed in particular by Paige and Saunders (1975) who relate the conjugate gradients methods to the Lanczos algorithm for the algebraic eigenproblem. The Lanczos algorithm has been left out of this work because I feel it to be a tool best used by someone prepared to tolerate its quirks. 
This sentiment accords with the statement of Kahan and Parlett (1976):‘The urge to write a universal Lanczos program should be resisted, at least until the process is better understood.’ However, in the hands of an expert, it is a very powerful method for finding the eigenvalues of a large symmetric matrix. For indefinite coefficient matrices, however, I would expect the Paige-Saunders method to be preferred, by virtue of its design. In preparing the first edition of this book, I experimented briefly with some FORTRAN codes for several methods for iterative solution of linear equations and least-squares problems, finding no clear advantage for any one approach, though I did not focus on indefinite matrices. Therefore, the treatment which follows will stay with conjugate gradients, which has the advantage of introducing no fundamentally new ideas. It must be pointed out that the general-purpose minimisation algorithm 22 does not perform very well on linear least-squares or Rayleigh quotient minimisations.","PeriodicalId":345605,"journal":{"name":"Compact Numerical Methods for Computers","volume":"8 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Compact Numerical Methods for Computers","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1201/9781315139784-19","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

This monograph concludes by applying the conjugate gradients method, developed in chapter 16 for the minimisation of nonlinear functions, to linear equations, linear least-squares and algebraic eigenvalue problems. The methods suggested may not be the most efficient or effective of their type, since this subject area has not attracted a great deal of careful research. In fact, much of the work that has been performed on the sparse algebraic eigenvalue problem has been carried out by scientists and engineers in search of solutions. Stewart (1976) has prepared an extensive bibliography on the large, sparse, generalised symmetric matrix eigenvalue problem, in which it is unfortunately difficult to find many reports that do more than describe a method. Thorough, or even perfunctory, testing is often omitted, and convergence is rarely demonstrated, let alone proved. The work of Professor Axel Ruhe and his co-workers at Umeå is a notable exception to this generality. Partly, the lack of testing is due to the sheer size of the matrices that may be involved in real problems and the cost of finding eigensolutions. The linear-equation and least-squares problems have enjoyed more diligent study. A number of studies have been made of the conjugate gradients method for linear-equation systems with positive definite coefficient matrices, one of which is that of Reid (1971). Related methods have been developed in particular by Paige and Saunders (1975), who relate the conjugate gradients method to the Lanczos algorithm for the algebraic eigenproblem. The Lanczos algorithm has been left out of this work because I feel it to be a tool best used by someone prepared to tolerate its quirks.

This sentiment accords with the statement of Kahan and Parlett (1976): 'The urge to write a universal Lanczos program should be resisted, at least until the process is better understood.' In the hands of an expert, however, it is a very powerful method for finding the eigenvalues of a large symmetric matrix. For indefinite coefficient matrices, I would expect the Paige-Saunders method to be preferred by virtue of its design. In preparing the first edition of this book, I experimented briefly with FORTRAN codes for several methods for the iterative solution of linear equations and least-squares problems, finding no clear advantage for any one approach, though I did not focus on indefinite matrices. The treatment which follows will therefore stay with conjugate gradients, which has the advantage of introducing no fundamentally new ideas. It must be pointed out, however, that the general-purpose minimisation algorithm 22 does not perform very well on linear least-squares or Rayleigh quotient minimisations.
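As a point of reference for the setting studied by Reid (1971), the classical conjugate gradients iteration for a symmetric positive definite system Ax = b can be sketched as below. This is the textbook algorithm in modern Python/NumPy rather than the chapter's own code; the function name and tolerance are illustrative choices.

```python
import numpy as np

def conjugate_gradients(A, b, x0=None, tol=1e-10, max_iter=None):
    """Solve A x = b by conjugate gradients; A must be symmetric positive definite."""
    n = len(b)
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float).copy()
    r = b - A @ x            # residual of the current estimate
    p = r.copy()             # first search direction is the residual
    rs = r @ r
    for _ in range(max_iter or n):
        Ap = A @ p
        alpha = rs / (p @ Ap)        # step length minimising along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:    # converged: residual norm is small
            break
        p = r + (rs_new / rs) * p    # new direction, conjugate to the old ones
        rs = rs_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])  # symmetric positive definite
b = np.array([1.0, 2.0])
x = conjugate_gradients(A, b)
```

In exact arithmetic the iteration terminates in at most n steps, which is why the loop bound defaults to the order of the system; in floating point it is better regarded as an iterative method run until the residual norm is acceptably small.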