Distributed dimension reduction with nearly oracle rate

Zhengtian Zhu, Liping Zhu
{"title":"以接近oracle的速度进行分布式降维","authors":"Zhengtian Zhu, Liping Zhu","doi":"10.1002/sam.11592","DOIUrl":null,"url":null,"abstract":"We consider sufficient dimension reduction for heterogeneous massive data. We show that, even in the presence of heterogeneity and nonlinear dependence, the minimizers of convex loss functions of linear regression fall into the central subspace at the population level. We suggest a distributed algorithm to perform sufficient dimension reduction, where the convex loss functions are approximated with surrogate quadratic losses. This allows to perform dimension reduction in a unified least squares framework and facilitates to transmit the gradients in our distributed algorithm. The minimizers of these surrogate quadratic losses possess a nearly oracle rate after a finite number of iterations. We conduct simulations and an application to demonstrate the effectiveness of our proposed distributed algorithm for heterogeneous massive data.","PeriodicalId":342679,"journal":{"name":"Statistical Analysis and Data Mining: The ASA Data Science Journal","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-08-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Distributed dimension reduction with nearly oracle rate\",\"authors\":\"Zhengtian Zhu, Liping Zhu\",\"doi\":\"10.1002/sam.11592\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"We consider sufficient dimension reduction for heterogeneous massive data. We show that, even in the presence of heterogeneity and nonlinear dependence, the minimizers of convex loss functions of linear regression fall into the central subspace at the population level. We suggest a distributed algorithm to perform sufficient dimension reduction, where the convex loss functions are approximated with surrogate quadratic losses. This allows to perform dimension reduction in a unified least squares framework and facilitates to transmit the gradients in our distributed algorithm. The minimizers of these surrogate quadratic losses possess a nearly oracle rate after a finite number of iterations. We conduct simulations and an application to demonstrate the effectiveness of our proposed distributed algorithm for heterogeneous massive data.\",\"PeriodicalId\":342679,\"journal\":{\"name\":\"Statistical Analysis and Data Mining: The ASA Data Science Journal\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-08-03\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Statistical Analysis and Data Mining: The ASA Data Science Journal\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1002/sam.11592\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Statistical Analysis and Data Mining: The ASA Data Science Journal","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1002/sam.11592","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

We consider sufficient dimension reduction for heterogeneous massive data. We show that, even in the presence of heterogeneity and nonlinear dependence, the minimizers of convex loss functions of linear regression fall into the central subspace at the population level. We suggest a distributed algorithm to perform sufficient dimension reduction, in which the convex loss functions are approximated by surrogate quadratic losses. This allows dimension reduction to be carried out in a unified least squares framework and facilitates transmitting the gradients in our distributed algorithm. The minimizers of these surrogate quadratic losses attain a nearly oracle rate after a finite number of iterations. We conduct simulations and an application to demonstrate the effectiveness of the proposed distributed algorithm for heterogeneous massive data.
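
The abstract describes a two-step recipe: approximate each convex loss by a surrogate quadratic loss, so that every update becomes a least-squares step driven only by gradients gathered from the machines, and then read off central-subspace directions from the minimizers of several such losses. Below is a minimal NumPy sketch of that recipe under stated assumptions: the quantile check losses, the curvature frozen on the first machine, the step size, and the function names (surrogate_fit, check_loss_grad, central_subspace) are all illustrative choices, not the authors' implementation.

```python
# A minimal sketch of distributed estimation with a surrogate quadratic
# loss, in the spirit of the abstract.  Assumed choices throughout:
# quantile check losses, curvature frozen on the first machine's data,
# a constant step size.  Not the authors' implementation.
import numpy as np

def surrogate_fit(X_shards, y_shards, grad, n_iter=10, lr=0.5, ridge=1e-8):
    """Minimize a convex loss via a surrogate quadratic loss.

    The curvature is frozen to the least-squares Gram matrix of the
    first machine's data, so each round is a least-squares-type update
    driven by the gradients averaged across all machines.
    """
    X0 = X_shards[0]
    n0, d = X0.shape
    H = X0.T @ X0 / n0 + ridge * np.eye(d)  # surrogate quadratic curvature
    beta = np.zeros(d)
    for _ in range(n_iter):
        # Communication step: each machine transmits only a d-vector gradient.
        g = np.mean([grad(X, y, beta)
                     for X, y in zip(X_shards, y_shards)], axis=0)
        beta -= lr * np.linalg.solve(H, g)  # least-squares-style update
    return beta

def check_loss_grad(tau):
    """(Sub)gradient of the quantile check loss at level tau (an assumed
    convex loss; lr may need tuning because this loss is non-smooth)."""
    def grad(X, y, beta):
        r = y - X @ beta
        return -X.T @ (tau - (r < 0)) / len(y)
    return grad

def central_subspace(X_shards, y_shards, taus=(0.25, 0.5, 0.75), k=1):
    """Stack the minimizers across several convex losses and return the
    top-k left singular vectors as a basis estimate of the subspace."""
    B = np.column_stack([surrogate_fit(X_shards, y_shards, check_loss_grad(t))
                         for t in taus])
    return np.linalg.svd(B)[0][:, :k]

# Illustrative run: three machines, single-index data y = (x @ b)**3 + noise,
# so the central subspace is spanned by b.
rng = np.random.default_rng(0)
b = np.array([1.0, -1.0, 0.0, 0.0]) / np.sqrt(2)
X_shards = [rng.standard_normal((500, 4)) for _ in range(3)]
y_shards = [(X @ b) ** 3 + rng.standard_normal(500) for X in X_shards]
print(central_subspace(X_shards, y_shards, k=1).ravel())  # roughly ±b
```

Note the communication pattern: each round moves only d-dimensional gradients rather than raw data, which is what the unified least squares framework buys, and the fixed number of rounds mirrors the finite number of iterations after which the paper's estimator attains a nearly oracle rate.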

Latest articles in this journal
Neural interval‐censored survival regression with feature selection
Bayesian batch optimization for molybdenum versus tungsten inertial confinement fusion double shell target design
Gaussian process selections in semiparametric multi‐kernel machine regression for multi‐pathway analysis
An automated alignment algorithm for identification of the source of footwear impressions with common class characteristics
Confidence bounds for threshold similarity graph in random variable network