LMS is H∞ Optimal
B. Hassibi, A.H. Sayed, T. Kailath
Proceedings of the 32nd IEEE Conference on Decision and Control, 1993
DOI: 10.1109/CDC.1993.325187
Citations: 23
Abstract
Shows that the celebrated LMS (least-mean-squares) adaptive algorithm is an H∞-optimal filter. In other words, the LMS algorithm, which has long been regarded as an approximate least-mean-squares solution, is in fact a minimizer of the H∞ error norm. In particular, LMS minimizes the energy gain from the disturbances to the predicted errors, while normalized LMS minimizes the energy gain from the disturbances to the filtered errors. Moreover, since these algorithms are central H∞ filters, they are also risk-sensitive optimal and minimize a certain exponential cost function. The authors discuss various implications of these results, and show how they provide theoretical justification for the widely observed excellent robustness properties of the LMS filter.
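The two update rules the abstract contrasts can be sketched as follows. This is a minimal NumPy sketch in the standard system-identification setting; the variable names, step sizes, and test signal are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def lms(x, d, n_taps, mu):
    """LMS adaptive filter: fixed step size mu.

    Returns the final weight vector and the a-priori (predicted)
    error sequence e[n] = d[n] - w[n]^T u[n].
    """
    w = np.zeros(n_taps)
    errors = np.zeros(len(x) - n_taps)
    for i in range(len(errors)):
        u = x[i:i + n_taps][::-1]       # regressor, most recent sample first
        e = d[i + n_taps - 1] - w @ u   # predicted (a-priori) error
        w = w + mu * e * u              # LMS weight update
        errors[i] = e
    return w, errors

def nlms(x, d, n_taps, mu, eps=1e-8):
    """Normalized LMS: step size scaled by the regressor energy ||u||^2."""
    w = np.zeros(n_taps)
    errors = np.zeros(len(x) - n_taps)
    for i in range(len(errors)):
        u = x[i:i + n_taps][::-1]
        e = d[i + n_taps - 1] - w @ u
        w = w + (mu / (eps + u @ u)) * e * u  # normalized update
        errors[i] = e
    return w, errors
```

In a noise-free identification run (d generated by filtering x with a short FIR system), both recursions drive the predicted error toward zero and the weights toward the true taps; the paper's point is that this familiar recursion is not merely an approximate least-squares method but exactly minimizes the worst-case energy gain from disturbances to these errors.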