Approximate separability of symmetrically penalized least squares in high dimensions: characterization and consequences

IF 1.4 · Zone 4 (Mathematics) · Q2 MATHEMATICS, APPLIED · Information and Inference: A Journal of the IMA · Pub Date: 2021-02-01 · DOI: 10.1093/imaiai/iaaa037
Michael Celentano
Information and Inference: A Journal of the IMA, vol. 10, no. 3, pp. 1105–1165.
Citations: 3

Abstract

We show that the high-dimensional behavior of symmetrically penalized least squares with a possibly non-separable, symmetric, convex penalty in both (i) the Gaussian sequence model and (ii) the linear model with uncorrelated Gaussian designs nearly agrees with the behavior of least squares with an appropriately chosen separable penalty in these same models. This agreement is established by finite-sample concentration inequalities which precisely characterize the behavior of symmetrically penalized least squares in both models via a comparison to a simple scalar statistical model. The concentration inequalities are novel in their precision and generality. Our results help clarify the role non-separability can play in high-dimensional M-estimation. In particular, if the empirical distribution of the coordinates of the parameter is known—exactly or approximately—there are at most limited advantages to using non-separable, symmetric penalties over separable ones. In contrast, if the empirical distribution of the coordinates of the parameter is unknown, we argue that non-separable, symmetric penalties automatically implement an adaptive procedure, which we characterize. We also provide a partial converse which characterizes the adaptive procedures which can be implemented in this way.
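To make the separable/non-separable distinction concrete, the sketch below (an illustration, not code from the paper) contrasts two symmetric convex penalties in the Gaussian sequence model y = θ + noise: the separable ℓ1 penalty, whose proximal operator is coordinate-wise soft-thresholding, and the non-separable sorted-ℓ1 (SLOPE) penalty, whose proximal operator couples coordinates through the order statistics of |y|. The SLOPE prox implementation via pool-adjacent-violators is a standard construction; all variable names and the example signal are hypothetical.

```python
# Illustrative sketch (not from the paper): separable vs. non-separable
# symmetric penalties in the Gaussian sequence model y = theta + noise.
import numpy as np

def prox_l1(y, lam):
    """Separable penalty: coordinate-wise soft-thresholding,
    the prox of lam * ||x||_1."""
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

def prox_slope(y, lams):
    """Non-separable symmetric penalty: prox of the sorted-l1 (SLOPE)
    penalty sum_i lams[i] * |x|_(i), with lams non-increasing.
    Computed by subtracting the weights from the sorted |y| and then
    projecting onto the non-increasing cone (pool-adjacent-violators)."""
    sign = np.sign(y)
    order = np.argsort(-np.abs(y))              # indices sorting |y| descending
    z = np.abs(y)[order] - lams                 # thresholded, sorted values
    # PAV: merge adjacent blocks whose averages violate non-increasing order.
    sums, counts = [], []
    for zi in z:
        sums.append(float(zi)); counts.append(1)
        while len(sums) > 1 and sums[-2] / counts[-2] <= sums[-1] / counts[-1]:
            sums[-2] += sums[-1]; counts[-2] += counts[-1]
            sums.pop(); counts.pop()
    x_sorted = np.concatenate(
        [np.full(c, max(s / c, 0.0)) for s, c in zip(sums, counts)]
    )
    x = np.empty(len(z))
    x[order] = x_sorted                         # undo the sort
    return sign * x

rng = np.random.default_rng(0)
n = 1000
theta = np.where(rng.random(n) < 0.1, 3.0, 0.0)     # sparse signal (hypothetical)
y = theta + rng.standard_normal(n)                   # Gaussian sequence model

est_sep = prox_l1(y, 1.0)
est_slope = prox_slope(y, np.full(n, 1.0))
```

With a constant weight sequence, the SLOPE prox reduces exactly to soft-thresholding, so the two estimates coincide; with a genuinely decreasing weight sequence, the coordinates are coupled and the estimator is no longer separable. This coordinate coupling through the empirical distribution of |y| is one concrete mechanism by which a non-separable symmetric penalty can act adaptively.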