{"title":"Model Selection Based On Asymptotic Bayes Theory","authors":"P. Djurić","doi":"10.1109/SSAP.1994.572419","DOIUrl":null,"url":null,"abstract":"The two most popular model selection rules in the signal processing literature are the Akaike’s criterion AIC and the Rissanen’s principle of minimum description length (MDL). These rules are similar in form in that they both consist of data and penalty terms. Their data terms are identical, while the penalties are different, the MDL being more stringent towards overparameterization. The two rules, however, penalize for each additional model parameter with an equal incremental amount of penalty, regardless of the parame ter’s role in the model. In this paper we attempt to show that this should not be the case. We derive an asymptotical maximum a posteriori (MAP) rule with more accurate penalties and provide simulation results that show improved performance of the so derived rule over the AIC and MDL.","PeriodicalId":151571,"journal":{"name":"IEEE Seventh SP Workshop on Statistical Signal and Array Processing","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"1994-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"19","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Seventh SP Workshop on Statistical Signal and Array Processing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SSAP.1994.572419","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 19
Abstract
The two most popular model selection rules in the signal processing literature are Akaike's information criterion (AIC) and Rissanen's minimum description length (MDL) principle. These rules are similar in form in that both consist of a data term and a penalty term. Their data terms are identical, while their penalties differ, with the MDL being more stringent toward overparameterization. Both rules, however, penalize each additional model parameter by the same incremental amount, regardless of the parameter's role in the model. In this paper we attempt to show that this should not be the case. We derive an asymptotic maximum a posteriori (MAP) rule with more accurate penalties and provide simulation results showing improved performance of the derived rule over the AIC and MDL.
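As a rough illustration of the shared structure the abstract describes (a common data term plus rule-specific penalties), the sketch below compares AIC and MDL for choosing a polynomial order under a Gaussian noise assumption. This is not the paper's asymptotic MAP rule; the Gaussian likelihood, the polynomial example, and the parameter count k are illustrative assumptions.

```python
import numpy as np

def gaussian_log_likelihood(residuals):
    """Maximized Gaussian log-likelihood of a fitted model, up to additive constants."""
    N = len(residuals)
    sigma2 = np.mean(residuals**2)
    return -0.5 * N * np.log(sigma2)

def aic(log_lik, k):
    # AIC in log-likelihood units: data term plus a penalty of 1 per parameter.
    return -log_lik + k

def mdl(log_lik, k, N):
    # MDL: same data term, but each parameter costs 0.5*log(N).
    return -log_lik + 0.5 * k * np.log(N)

# Hypothetical example: select a polynomial order from noisy samples.
rng = np.random.default_rng(0)
N = 200
x = np.linspace(-1.0, 1.0, N)
y = 1.0 - 2.0 * x + 0.5 * x**2 + 0.2 * rng.standard_normal(N)

for order in range(6):
    coeffs = np.polyfit(x, y, order)
    residuals = y - np.polyval(coeffs, x)
    ll = gaussian_log_likelihood(residuals)
    k = order + 1  # number of estimated coefficients
    print(f"order {order}: AIC = {aic(ll, k):.1f}, MDL = {mdl(ll, k, N):.1f}")
```

Note that both criteria charge every extra coefficient the same fixed amount (1 for AIC, 0.5*log(N) for MDL); the paper's point is that a more careful asymptotic Bayesian analysis assigns penalties that depend on the individual parameter's role in the model.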