Performance of the forgetting factor RLS during the transient phase

G. Moustakides
1996 IEEE Digital Signal Processing Workshop Proceedings, September 1996
DOI: 10.1109/DSPWS.1996.555538
The recursive least squares (RLS) algorithm is one of the best-known algorithms for adaptive filtering and system identification. We consider the convergence properties of the forgetting factor RLS algorithm in a stationary data environment. We study how the convergence speed of RLS depends on the initialization of the input sample covariance matrix and on the observation noise level. By obtaining estimates of the settling time, we show that in a high SNR environment RLS converges very quickly when initialized with a matrix of small norm, and that the convergence speed decreases as the norm of the initialization matrix increases. In a medium SNR environment the optimum convergence speed of the algorithm is reduced, but RLS becomes less sensitive to initialization. Finally, in a low SNR environment it is preferable to start the algorithm with a matrix of large norm.
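To make the role of the covariance initialization concrete, here is a minimal sketch of the standard forgetting-factor RLS recursion for FIR system identification. The parameter names (`lam` for the forgetting factor, `delta` for the initialization scale) are illustrative assumptions, not the paper's notation; initializing the inverse covariance as P(0) = (1/delta) I with small `delta` corresponds to starting the sample covariance with a matrix of small norm.

```python
import numpy as np

rng = np.random.default_rng(0)

def rls_identify(x, d, order, lam=0.99, delta=1e-3):
    """Forgetting-factor RLS estimate of an FIR system from input x, output d.

    delta scales the initial inverse covariance P(0) = (1/delta) * I;
    small delta <=> small-norm initialization of the sample covariance.
    """
    w = np.zeros(order)            # coefficient estimate
    P = np.eye(order) / delta      # inverse of the (regularized) sample covariance
    for n in range(order - 1, len(x)):
        u = x[n - order + 1:n + 1][::-1]       # regressor, most recent sample first
        k = P @ u / (lam + u @ P @ u)          # gain vector
        e = d[n] - w @ u                       # a priori estimation error
        w = w + k * e                          # coefficient update
        P = (P - np.outer(k, u @ P)) / lam     # inverse-covariance update
    return w

# Hypothetical high-SNR experiment: identify a 4-tap FIR system.
true_w = np.array([1.0, -0.5, 0.25, 0.1])
x = rng.standard_normal(2000)
d = np.convolve(x, true_w)[:len(x)] + 1e-3 * rng.standard_normal(len(x))
w_hat = rls_identify(x, d, order=4)
print(np.round(w_hat, 3))
```

In this high-SNR setting the small-norm initialization (small `delta`, hence large `P(0)`) lets the estimate settle on the true taps within a few hundred samples, consistent with the abstract's claim; increasing `delta` slows the transient.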