{"title":"On prediction of moving-average processes","authors":"L. Shepp, D. Slepian, A. Wyner","doi":"10.1017/S0001867800050163","DOIUrl":null,"url":null,"abstract":"Let {X<inf>n</inf>} be a discrete-time stationary moving-average process having the representation where the real-valued process (Y<inf>n</inf>) has a well-defined entropy and spectrum. Let ∊<sup>∗2</sup><inf>k</inf> denote the smallest mean-squared error of any estimate of X<inf>n</inf> based on observations of X<inf>n–1</inf>, X<inf>n–2</inf>, …, X<inf>n–k</inf>, and let ∊<sup>∗2</sup><inf>klin</inf>, be the corresponding least mean-squared error when the estimator is linear in the k observations. We establish an inequality of the form where G(Y) ≤ 1 depends only on the entropy and spectrum of {Y<inf>n</inf>}. We also obtain explicit formulas for ∊<sup>∗2</sup><inf>k</inf> and ∊<sup>∗2</sup><inf>klin</inf> and compare these quantities graphically when M = 2 and the {Y<inf>n</inf>} are i.i.d. variates with one of several different distributions. The best estimators are quite complicated but are frequently considerably better than the best linear ones. This extends a result of M. Kanter.","PeriodicalId":447574,"journal":{"name":"The Bell System Technical Journal","volume":"40 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1980-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"27","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"The Bell System Technical Journal","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1017/S0001867800050163","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 27
Abstract
Let $\{X_n\}$ be a discrete-time stationary moving-average process having the representation $X_n = \sum_{i=0}^{M} a_i Y_{n-i}$, where the real-valued process $\{Y_n\}$ has a well-defined entropy and spectrum. Let $\varepsilon^{*2}_k$ denote the smallest mean-squared error of any estimate of $X_n$ based on observations of $X_{n-1}, X_{n-2}, \ldots, X_{n-k}$, and let $\varepsilon^{*2}_{k,\mathrm{lin}}$ be the corresponding least mean-squared error when the estimator is linear in the $k$ observations. We establish an inequality of the form $\varepsilon^{*2}_k \ge G(Y)\,\varepsilon^{*2}_{k,\mathrm{lin}}$, where $G(Y) \le 1$ depends only on the entropy and spectrum of $\{Y_n\}$. We also obtain explicit formulas for $\varepsilon^{*2}_k$ and $\varepsilon^{*2}_{k,\mathrm{lin}}$ and compare these quantities graphically when $M = 2$ and the $\{Y_n\}$ are i.i.d. variates with one of several different distributions. The best estimators are quite complicated but are frequently considerably better than the best linear ones. This extends a result of M. Kanter.
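A small Monte Carlo sketch can make the comparison concrete. The script below is illustrative and not from the paper: it assumes an MA(2) process with hypothetical coefficients a1, a2 driven by i.i.d. symmetric ±1 noise (one of many possible choices for the law of $Y_n$), estimates the best linear predictor of $X_n$ from the past $k$ values by least squares, and estimates the optimal (conditional-mean) predictor empirically, which is feasible here because the vector of past observations takes only finitely many values.

```python
# Monte Carlo sketch (illustrative, not the paper's method): compare the
# best linear predictor with an empirical conditional-mean predictor for
# an MA(2) process X_n = Y_n + a1*Y_{n-1} + a2*Y_{n-2} with i.i.d. +/-1
# noise. The coefficients a1, a2 and the noise law are assumptions.
import numpy as np

rng = np.random.default_rng(0)
a1, a2, k, N = 0.9, 0.5, 2, 200_000

Y = rng.choice([-1.0, 1.0], size=N + 2)        # i.i.d. symmetric +/-1 noise
X = Y[2:] + a1 * Y[1:-1] + a2 * Y[:-2]         # MA(2) sample path, length N

# Targets and k-step pasts: predict X_n from (X_{n-1}, ..., X_{n-k}).
past = np.column_stack([X[k - j - 1 : len(X) - j - 1] for j in range(k)])
target = X[k:]

# Best linear predictor: least-squares regression of X_n on the k past
# values (a sample analogue of solving the normal equations).
coef, *_ = np.linalg.lstsq(past, target, rcond=None)
mse_lin = np.mean((target - past @ coef) ** 2)

# Conditional-mean predictor: with +/-1 noise, each past tuple takes
# finitely many values, so E[X_n | past] can be estimated by averaging
# the targets within each distinct past tuple.
keys = [tuple(row) for row in np.round(past, 10)]
sums, counts = {}, {}
for key, t in zip(keys, target):
    sums[key] = sums.get(key, 0.0) + t
    counts[key] = counts.get(key, 0) + 1
pred = np.array([sums[key] / counts[key] for key in keys])
mse_opt = np.mean((target - pred) ** 2)

print(f"linear MSE  ~ {mse_lin:.4f}")   # Monte Carlo estimate of eps*2_{k,lin}
print(f"optimal MSE ~ {mse_opt:.4f}")   # Monte Carlo estimate of eps*2_k
```

Because the conditional means are fit and evaluated on the same sample, the optimal-MSE estimate is slightly optimistic, but with this sample size the bias is negligible; the point of the sketch is only that for markedly non-Gaussian noise the gap between the two errors can be substantial, in the spirit of the abstract's claim.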