Shakhawat Hossain, S. Ejaz Ahmed, Kjell A. Doksum
{"title":"广义线性模型中的收缩、预试和惩罚估计","authors":"Shakhawat Hossain , S. Ejaz Ahmed , Kjell A. Doksum","doi":"10.1016/j.stamet.2014.11.003","DOIUrl":null,"url":null,"abstract":"<div><p><span>We consider estimation in generalized linear models<span> when there are many potential predictors and some of them may not have influence on the response of interest. In the context of two competing models where one model includes all predictors and the other restricts variable coefficients<span> to a candidate linear subspace based on subject matter or prior knowledge, we investigate the relative performances of Stein type shrinkage, pretest, and penalty estimators (</span></span></span><span><math><msub><mrow><mi>L</mi></mrow><mrow><mn>1</mn></mrow></msub></math></span>GLM, adaptive <span><math><msub><mrow><mi>L</mi></mrow><mrow><mn>1</mn></mrow></msub></math></span><span><span>GLM, and SCAD) with respect to the unrestricted maximum likelihood estimator (MLE). The </span>asymptotic properties<span><span> of the pretest and shrinkage estimators including the derivation of asymptotic distributional biases and risks are established. In particular, we give conditions under which the shrinkage estimators are asymptotically more efficient than the unrestricted MLE. A </span>Monte Carlo simulation study shows that the mean squared error (MSE) of an adaptive shrinkage estimator is comparable to the MSE of the penalty estimators in many situations and in particular performs better than the penalty estimators when the dimension of the restricted parameter space is large. The Steinian shrinkage and penalty estimators all improve substantially on the unrestricted MLE. A real data set analysis is also presented to compare the suggested methods.</span></span></p></div>","PeriodicalId":48877,"journal":{"name":"Statistical Methodology","volume":"24 ","pages":"Pages 52-68"},"PeriodicalIF":0.0000,"publicationDate":"2015-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.stamet.2014.11.003","citationCount":"25","resultStr":"{\"title\":\"Shrinkage, pretest, and penalty estimators in generalized linear models\",\"authors\":\"Shakhawat Hossain , S. Ejaz Ahmed , Kjell A. Doksum\",\"doi\":\"10.1016/j.stamet.2014.11.003\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p><span>We consider estimation in generalized linear models<span> when there are many potential predictors and some of them may not have influence on the response of interest. In the context of two competing models where one model includes all predictors and the other restricts variable coefficients<span> to a candidate linear subspace based on subject matter or prior knowledge, we investigate the relative performances of Stein type shrinkage, pretest, and penalty estimators (</span></span></span><span><math><msub><mrow><mi>L</mi></mrow><mrow><mn>1</mn></mrow></msub></math></span>GLM, adaptive <span><math><msub><mrow><mi>L</mi></mrow><mrow><mn>1</mn></mrow></msub></math></span><span><span>GLM, and SCAD) with respect to the unrestricted maximum likelihood estimator (MLE). The </span>asymptotic properties<span><span> of the pretest and shrinkage estimators including the derivation of asymptotic distributional biases and risks are established. In particular, we give conditions under which the shrinkage estimators are asymptotically more efficient than the unrestricted MLE. 
A </span>Monte Carlo simulation study shows that the mean squared error (MSE) of an adaptive shrinkage estimator is comparable to the MSE of the penalty estimators in many situations and in particular performs better than the penalty estimators when the dimension of the restricted parameter space is large. The Steinian shrinkage and penalty estimators all improve substantially on the unrestricted MLE. A real data set analysis is also presented to compare the suggested methods.</span></span></p></div>\",\"PeriodicalId\":48877,\"journal\":{\"name\":\"Statistical Methodology\",\"volume\":\"24 \",\"pages\":\"Pages 52-68\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2015-05-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://sci-hub-pdf.com/10.1016/j.stamet.2014.11.003\",\"citationCount\":\"25\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Statistical Methodology\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S1572312714000896\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q\",\"JCRName\":\"Mathematics\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Statistical Methodology","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1572312714000896","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q","JCRName":"Mathematics","Score":null,"Total":0}
Shrinkage, pretest, and penalty estimators in generalized linear models
We consider estimation in generalized linear models when there are many potential predictors and some of them may not have influence on the response of interest. In the context of two competing models, where one model includes all predictors and the other restricts variable coefficients to a candidate linear subspace based on subject matter or prior knowledge, we investigate the relative performances of Stein-type shrinkage, pretest, and penalty estimators (L1 GLM, adaptive L1 GLM, and SCAD) with respect to the unrestricted maximum likelihood estimator (MLE). The asymptotic properties of the pretest and shrinkage estimators, including the derivation of asymptotic distributional biases and risks, are established. In particular, we give conditions under which the shrinkage estimators are asymptotically more efficient than the unrestricted MLE. A Monte Carlo simulation study shows that the mean squared error (MSE) of an adaptive shrinkage estimator is comparable to the MSE of the penalty estimators in many situations and, in particular, that it performs better than the penalty estimators when the dimension of the restricted parameter space is large. The Steinian shrinkage and penalty estimators all improve substantially on the unrestricted MLE. A real data set analysis is also presented to compare the suggested methods.
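As a rough illustration (not taken from the paper), the sketch below fits a logistic GLM and constructs the unrestricted MLE, the restricted MLE under a candidate restriction that the last p2 coefficients are zero, a pretest estimator, and standard Stein-type (James-Stein) shrinkage estimators, with an L1-penalized fit for comparison. The estimator formulas follow the textbook Stein-type form; the data-generating setup, variable names, and tuning choices are assumptions for illustration, not the authors' notation or exact variants.

```python
# Illustrative sketch (assumptions, not the authors' code): Stein-type shrinkage,
# pretest, and L1-penalized estimators in a logistic GLM, where the candidate
# restriction sets the last p2 coefficients to zero.
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2
from sklearn.linear_model import LogisticRegressionCV

rng = np.random.default_rng(0)
n, p1, p2 = 500, 4, 6                      # p1 active + p2 possibly inactive predictors
X = rng.standard_normal((n, p1 + p2))
beta_true = np.r_[np.ones(p1), np.zeros(p2)]
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ beta_true)))

# Unrestricted MLE: all predictors included
full = sm.GLM(y, X, family=sm.families.Binomial()).fit()
beta_ue = full.params

# Restricted MLE: coefficients of the last p2 predictors fixed at zero
sub = sm.GLM(y, X[:, :p1], family=sm.families.Binomial()).fit()
beta_re = np.r_[sub.params, np.zeros(p2)]

# Likelihood-ratio statistic for testing the candidate restriction
T = 2.0 * (full.llf - sub.llf)

# Pretest estimator: keep the restricted fit unless the restriction is rejected
alpha = 0.05
beta_pt = beta_ue if T > chi2.ppf(1 - alpha, df=p2) else beta_re

# Stein-type shrinkage estimator and its positive-part version
shrink = 1.0 - (p2 - 2) / T
beta_s = beta_re + shrink * (beta_ue - beta_re)
beta_sp = beta_re + max(shrink, 0.0) * (beta_ue - beta_re)

# L1-penalized GLM (lasso-type) for comparison; adaptive L1 and SCAD would need
# weighted or nonconvex penalties and are not sketched here.
lasso = LogisticRegressionCV(penalty="l1", solver="saga", fit_intercept=False,
                             Cs=10, max_iter=5000).fit(X, y)

print("unrestricted :", np.round(beta_ue, 3))
print("pretest      :", np.round(beta_pt, 3))
print("shrinkage    :", np.round(beta_s, 3))
print("positive-part:", np.round(beta_sp, 3))
print("L1 GLM       :", np.round(lasso.coef_.ravel(), 3))
```

In a simulation such as the one sketched above, comparing these estimates to beta_true over repeated draws (e.g., averaging squared errors) reproduces the kind of MSE comparison the paper studies; how closely it matches the reported results depends on the exact estimator variants and designs used there.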
Journal description:
Statistical Methodology aims to publish articles of high quality reflecting the varied facets of contemporary statistical theory as well as significant applications. In addition to helping to stimulate research, the journal intends to foster interaction among statisticians and scientists in other disciplines broadly interested in statistical methodology. The journal focuses on traditional areas such as statistical inference, multivariate analysis, design of experiments, sampling theory, regression analysis, resampling methods, time series, and nonparametric statistics, and also gives special emphasis to established as well as emerging applied areas.