Cluster-seeking shrinkage estimators

K. Srinath, R. Venkataramanan
{"title":"Cluster-seeking shrinkage estimators","authors":"K. Srinath, R. Venkataramanan","doi":"10.1109/ISIT.2016.7541418","DOIUrl":null,"url":null,"abstract":"This paper considers the problem of estimating a high-dimensional vector θ ∈ ℝn from a noisy one-time observation. The noise vector is assumed to be i.i.d. Gaussian with known variance. For the squared-error loss function, the James-Stein (JS) estimator is known to dominate the simple maximum-likelihood (ML) estimator when the dimension n exceeds two. The JS-estimator shrinks the observed vector towards the origin, and the risk reduction over the ML-estimator is greatest for θ that lie close to the origin. JS-estimators can be generalized to shrink the data towards any target subspace. Such estimators also dominate the ML-estimator, but the risk reduction is significant only when θ lies close to the subspace. This leads to the question: in the absence of prior information about θ, how do we design estimators that give significant risk reduction over the ML-estimator for a wide range of θ? In this paper, we attempt to infer the structure of θ from the observed data in order to construct a good attracting subspace for the shrinkage estimator. We provide concentration results for the squared-error loss and convergence results for the risk of the proposed estimators, as well as simulation results to support the claims. The estimators give significant risk reduction over the ML-estimator for a wide range of θ, particularly for large n.","PeriodicalId":198767,"journal":{"name":"2016 IEEE International Symposium on Information Theory (ISIT)","volume":"42 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2016-08-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2016 IEEE International Symposium on Information Theory (ISIT)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISIT.2016.7541418","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 2

Abstract

This paper considers the problem of estimating a high-dimensional vector θ ∈ ℝⁿ from a noisy one-time observation. The noise vector is assumed to be i.i.d. Gaussian with known variance. For the squared-error loss function, the James-Stein (JS) estimator is known to dominate the simple maximum-likelihood (ML) estimator when the dimension n exceeds two. The JS-estimator shrinks the observed vector towards the origin, and the risk reduction over the ML-estimator is greatest for θ that lie close to the origin. JS-estimators can be generalized to shrink the data towards any target subspace. Such estimators also dominate the ML-estimator, but the risk reduction is significant only when θ lies close to the subspace. This leads to the question: in the absence of prior information about θ, how do we design estimators that give significant risk reduction over the ML-estimator for a wide range of θ? In this paper, we attempt to infer the structure of θ from the observed data in order to construct a good attracting subspace for the shrinkage estimator. We provide concentration results for the squared-error loss and convergence results for the risk of the proposed estimators, as well as simulation results to support the claims. The estimators give significant risk reduction over the ML-estimator for a wide range of θ, particularly for large n.
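The paper's data-driven construction of the attracting subspace is not reproduced here. As a minimal sketch of the classical building blocks the abstract refers to, the Python snippet below implements the positive-part James-Stein estimator (shrinkage towards the origin) and a Lindley-type variant that shrinks towards the subspace spanned by the all-ones vector, and compares their Monte Carlo risk against the ML estimator. The function names, the choice of span{1} as the target subspace, and the simulation setup are illustrative assumptions, not taken from the paper.

```python
# Sketch (assumed setup, not the paper's proposed estimator): classical
# James-Stein shrinkage towards the origin, and a Lindley-type variant
# shrinking towards the subspace spanned by the all-ones vector.
import numpy as np

def js_origin(y, sigma2):
    """Positive-part JS estimate; dominates ML for n > 2."""
    n = y.size
    factor = 1.0 - (n - 2) * sigma2 / np.sum(y ** 2)
    return max(factor, 0.0) * y

def js_subspace(y, sigma2):
    """Shrink towards the projection of y onto span{1}; dominates ML for n > 3."""
    n = y.size
    proj = np.full(n, y.mean())        # projection onto the all-ones subspace
    resid = y - proj
    factor = 1.0 - (n - 3) * sigma2 / np.sum(resid ** 2)
    return proj + max(factor, 0.0) * resid

def mc_risk(estimator, theta, sigma2, trials=5000, seed=0):
    """Monte Carlo estimate of the normalized risk E||theta_hat - theta||^2 / n."""
    rng = np.random.default_rng(seed)
    n = theta.size
    total = 0.0
    for _ in range(trials):
        y = theta + rng.normal(scale=np.sqrt(sigma2), size=n)
        total += np.sum((estimator(y, sigma2) - theta) ** 2)
    return total / (trials * n)

# The ML estimator (theta_hat = y) has normalized risk sigma2 for every theta;
# the shrinkage gain is largest when theta lies near the attracting target.
n, sigma2 = 50, 1.0
for theta in (0.2 * np.ones(n), 3.0 * np.ones(n), np.linspace(-3, 3, n)):
    r0 = mc_risk(js_origin, theta, sigma2)
    rs = mc_risk(js_subspace, theta, sigma2)
    print(f"JS(origin) risk = {r0:.3f},  JS(span 1) risk = {rs:.3f},  ML risk = {sigma2:.3f}")
```

Both shrinkage estimators stay at or below the ML risk for every θ, but the subspace version is far better when θ is (nearly) constant, while the origin version wins only when θ is small. This sensitivity of the gain to where θ sits, relative to a fixed target, is what motivates the paper's attempt to infer a good attracting subspace from the data itself.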