{"title":"无偏差收缩估计量的非极小性","authors":"Yuzo Maruyama, Akimichi Takemura","doi":"10.1007/s42081-023-00218-x","DOIUrl":null,"url":null,"abstract":"Abstract We consider the estimation of the p -variate normal mean of $$X\\sim \\mathcal {N}_p(\\theta ,I)$$ <mml:math xmlns:mml=\"http://www.w3.org/1998/Math/MathML\"> <mml:mrow> <mml:mi>X</mml:mi> <mml:mo>∼</mml:mo> <mml:msub> <mml:mi>N</mml:mi> <mml:mi>p</mml:mi> </mml:msub> <mml:mrow> <mml:mo>(</mml:mo> <mml:mi>θ</mml:mi> <mml:mo>,</mml:mo> <mml:mi>I</mml:mi> <mml:mo>)</mml:mo> </mml:mrow> </mml:mrow> </mml:math> under the quadratic loss function. We investigate the decision theoretic properties of debiased shrinkage estimator, the estimator which shrinks towards the origin for smaller $$\\Vert x\\Vert ^2$$ <mml:math xmlns:mml=\"http://www.w3.org/1998/Math/MathML\"> <mml:msup> <mml:mrow> <mml:mo>‖</mml:mo> <mml:mi>x</mml:mi> <mml:mo>‖</mml:mo> </mml:mrow> <mml:mn>2</mml:mn> </mml:msup> </mml:math> and which is exactly equal to the unbiased estimator X for larger $$\\Vert x\\Vert ^2$$ <mml:math xmlns:mml=\"http://www.w3.org/1998/Math/MathML\"> <mml:msup> <mml:mrow> <mml:mo>‖</mml:mo> <mml:mi>x</mml:mi> <mml:mo>‖</mml:mo> </mml:mrow> <mml:mn>2</mml:mn> </mml:msup> </mml:math> . Such debiased shrinkage estimator seems superior to the unbiased estimator X , which implies minimaxity. 
However, we show that it is not minimax under mild conditions.","PeriodicalId":29911,"journal":{"name":"Japanese Journal of Statistics and Data Science","volume":"14 1","pages":"0"},"PeriodicalIF":1.1000,"publicationDate":"2023-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Non-minimaxity of debiased shrinkage estimators\",\"authors\":\"Yuzo Maruyama, Akimichi Takemura\",\"doi\":\"10.1007/s42081-023-00218-x\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Abstract We consider the estimation of the p -variate normal mean of $$X\\\\sim \\\\mathcal {N}_p(\\\\theta ,I)$$ <mml:math xmlns:mml=\\\"http://www.w3.org/1998/Math/MathML\\\"> <mml:mrow> <mml:mi>X</mml:mi> <mml:mo>∼</mml:mo> <mml:msub> <mml:mi>N</mml:mi> <mml:mi>p</mml:mi> </mml:msub> <mml:mrow> <mml:mo>(</mml:mo> <mml:mi>θ</mml:mi> <mml:mo>,</mml:mo> <mml:mi>I</mml:mi> <mml:mo>)</mml:mo> </mml:mrow> </mml:mrow> </mml:math> under the quadratic loss function. We investigate the decision theoretic properties of debiased shrinkage estimator, the estimator which shrinks towards the origin for smaller $$\\\\Vert x\\\\Vert ^2$$ <mml:math xmlns:mml=\\\"http://www.w3.org/1998/Math/MathML\\\"> <mml:msup> <mml:mrow> <mml:mo>‖</mml:mo> <mml:mi>x</mml:mi> <mml:mo>‖</mml:mo> </mml:mrow> <mml:mn>2</mml:mn> </mml:msup> </mml:math> and which is exactly equal to the unbiased estimator X for larger $$\\\\Vert x\\\\Vert ^2$$ <mml:math xmlns:mml=\\\"http://www.w3.org/1998/Math/MathML\\\"> <mml:msup> <mml:mrow> <mml:mo>‖</mml:mo> <mml:mi>x</mml:mi> <mml:mo>‖</mml:mo> </mml:mrow> <mml:mn>2</mml:mn> </mml:msup> </mml:math> . Such debiased shrinkage estimator seems superior to the unbiased estimator X , which implies minimaxity. 
However, we show that it is not minimax under mild conditions.\",\"PeriodicalId\":29911,\"journal\":{\"name\":\"Japanese Journal of Statistics and Data Science\",\"volume\":\"14 1\",\"pages\":\"0\"},\"PeriodicalIF\":1.1000,\"publicationDate\":\"2023-10-06\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Japanese Journal of Statistics and Data Science\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1007/s42081-023-00218-x\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"STATISTICS & PROBABILITY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Japanese Journal of Statistics and Data Science","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1007/s42081-023-00218-x","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"STATISTICS & PROBABILITY","Score":null,"Total":0}
Abstract: We consider estimation of the p-variate normal mean $\theta$ from $X \sim \mathcal{N}_p(\theta, I)$ under quadratic loss. We investigate the decision-theoretic properties of the debiased shrinkage estimator, which shrinks toward the origin for smaller $\Vert x\Vert^2$ and is exactly equal to the unbiased estimator $X$ for larger $\Vert x\Vert^2$. Such a debiased shrinkage estimator appears to dominate the unbiased estimator $X$, which would imply minimaxity. However, we show that it is not minimax under mild conditions.
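The abstract describes a two-regime rule: shrink toward the origin when $\Vert x\Vert^2$ is small, and return the unbiased estimate $x$ unchanged when $\Vert x\Vert^2$ is large. As a minimal sketch (not the authors' specific estimator), one hypothetical rule of this shape scales $x$ by $\min(1, \Vert x\Vert^2/c)$ for an assumed threshold $c$:

```python
import numpy as np

def debiased_shrinkage(x, c=4.0):
    """Hypothetical estimator of the shape described in the abstract:
    shrinks x toward the origin when ||x||^2 < c, and is exactly
    the unbiased estimator x when ||x||^2 >= c. The threshold c and
    the linear shrinkage factor are illustrative assumptions."""
    s = float(np.dot(x, x))          # ||x||^2
    factor = min(1.0, s / c)         # < 1 inside the ball, 1 outside
    return factor * np.asarray(x, dtype=float)

# Inside the ball ||x||^2 < c: the estimate is pulled toward the origin.
small = np.array([0.5, 0.5])         # ||x||^2 = 0.5 < 4
shrunk = debiased_shrinkage(small)
print(np.linalg.norm(shrunk) < np.linalg.norm(small))   # True

# Outside the ball: identical to the unbiased estimator x (zero bias there).
large = np.array([3.0, 3.0])         # ||x||^2 = 18 >= 4
print(np.allclose(debiased_shrinkage(large), large))    # True
```

The "debiased" label reflects the second regime: unlike James–Stein-type estimators, which shrink for every $x$, this rule is unbiased whenever $\Vert x\Vert^2$ exceeds the threshold. The paper's point is that this seemingly harmless modification destroys minimaxity.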