Novel Newton algorithms for the Hermitian eigenvalue problem
M. Nikpour, J. Manton, R. Mahony
Final Program and Abstracts on Information, Decision and Control, 7 August 2002
DOI: 10.1109/IDC.2002.995439
We present three related algorithms for iteratively computing all the eigenvectors of a Hermitian matrix. The algorithms are based on the idea of applying Newton updates to individual eigenvectors at each iteration. The advantage of these Newton updates is that they have a cubic rate of convergence. The difference between the algorithms is how they prevent the individual updates from converging to the same eigenvector. The first algorithm finds the eigenvectors sequentially, and uses a novel form of deflation in order to ensure all the eigenvectors are found. Rather than modify the matrix directly, which introduces large errors if the matrix is ill-conditioned, deflation is achieved by restricting the Newton updates to lie in a subspace orthogonal to all previously found eigenvectors. The other algorithms estimate all the eigenvectors at once. At each iteration, they sweep through all the estimates, performing a Newton update on each estimate once per sweep. Orthogonality is maintained by explicit re-orthogonalisation after each update, which also serves to improve the asymptotic rate of convergence of the algorithms.
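The sequential variant described above can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the paper's exact algorithm: it uses Rayleigh quotient iteration as the Newton-type update (which also converges cubically for Hermitian matrices), and implements deflation as the abstract describes, by projecting each iterate onto the subspace orthogonal to the eigenvectors already found rather than modifying the matrix itself. The function name and parameters are hypothetical, and the real symmetric case is used for simplicity.

```python
import numpy as np

def deflated_newton_eig(A, tol=1e-10, max_iter=100, seed=0):
    """Sequentially find all eigenpairs of a real symmetric matrix A.

    Illustrative sketch only: Rayleigh quotient iteration (a Newton-type
    update with cubic convergence for symmetric/Hermitian matrices),
    with deflation done by restricting iterates to the orthogonal
    complement of previously found eigenvectors, instead of deflating
    the matrix directly.
    """
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    Q = np.zeros((n, 0))          # columns: eigenvectors found so far
    eigvals = []
    for _ in range(n):
        x = rng.standard_normal(n)
        x -= Q @ (Q.T @ x)        # start in the orthogonal complement
        x /= np.linalg.norm(x)
        for _ in range(max_iter):
            mu = x @ A @ x        # Rayleigh quotient (shift)
            try:
                y = np.linalg.solve(A - mu * np.eye(n), x)
            except np.linalg.LinAlgError:
                break             # shift hit an eigenvalue exactly
            y -= Q @ (Q.T @ y)    # deflation: stay orthogonal to Q
            x_new = y / np.linalg.norm(y)
            # converged when direction stops changing (up to sign)
            if np.linalg.norm(x_new - np.sign(x_new @ x) * x) < tol:
                x = x_new
                break
            x = x_new
        eigvals.append(x @ A @ x)
        Q = np.column_stack([Q, x])
    return np.array(eigvals), Q
```

Because the found eigenvectors span an invariant subspace of A, the shifted solve maps the orthogonal complement into itself, so the explicit re-projection mainly cleans up rounding error; this mirrors the abstract's point that deflating via the update, rather than the matrix, avoids amplifying errors on ill-conditioned problems.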