Sparse Generalized Principal Component Analysis for Large-scale Applications beyond Gaussianity

Qiaoya Zhang, Yiyuan She
{"title":"Sparse Generalized Principal Component Analysis for Large-scale Applications beyond Gaussianity","authors":"Qiaoya Zhang, Yiyuan She","doi":"10.4310/SII.2016.V9.N4.A11","DOIUrl":null,"url":null,"abstract":"Principal Component Analysis (PCA) is a dimension reduction technique. It produces inconsistent estimators when the dimensionality is moderate to high, which is often the problem in modern large-scale applications where algorithm scalability and model interpretability are difficult to achieve, not to mention the prevalence of missing values. While existing sparse PCA methods alleviate inconsistency, they are constrained to the Gaussian assumption of classical PCA and fail to address algorithm scalability issues. We generalize sparse PCA to the broad exponential family distributions under high-dimensional setup, with built-in treatment for missing values. Meanwhile we propose a family of iterative sparse generalized PCA (SG-PCA) algorithms such that despite the non-convexity and non-smoothness of the optimization task, the loss function decreases in every iteration. In terms of ease and intuitive parameter tuning, our sparsity-inducing regularization is far superior to the popular Lasso. Furthermore, to promote overall scalability, accelerated gradient is integrated for fast convergence, while a progressive screening technique gradually squeezes out nuisance dimensions of a large-scale problem for feasible optimization. High-dimensional simulation and real data experiments demonstrate the efficiency and efficacy of SG-PCA.","PeriodicalId":8446,"journal":{"name":"arXiv: Computation","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2015-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv: Computation","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.4310/SII.2016.V9.N4.A11","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

Principal Component Analysis (PCA) is a classical dimension reduction technique, but it produces inconsistent estimators when the dimensionality is moderate to high, a common situation in modern large-scale applications where algorithm scalability and model interpretability are difficult to achieve, not to mention the prevalence of missing values. While existing sparse PCA methods alleviate the inconsistency, they remain constrained to the Gaussian assumption of classical PCA and fail to address algorithm scalability. We generalize sparse PCA to the broad class of exponential-family distributions in the high-dimensional setting, with built-in treatment of missing values. We also propose a family of iterative sparse generalized PCA (SG-PCA) algorithms that decrease the loss function at every iteration, despite the non-convexity and non-smoothness of the optimization task. In terms of ease of use and intuitive parameter tuning, our sparsity-inducing regularization is far superior to the popular Lasso. Furthermore, to promote overall scalability, accelerated gradient descent is integrated for fast convergence, while a progressive screening technique gradually squeezes nuisance dimensions out of a large-scale problem to keep the optimization feasible. High-dimensional simulations and real-data experiments demonstrate the efficiency and efficacy of SG-PCA.
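To make the idea concrete, below is a minimal illustrative sketch, not the authors' SG-PCA implementation, of a rank-one generalized PCA iteration for binary data: a gradient step on the Bernoulli negative log-likelihood of the natural-parameter matrix Theta = u v^T, followed by hard thresholding of the loading vector to keep its k largest-magnitude entries, with missing cells simply masked out of the loss. The function name sgpca_rank1, the Bernoulli choice, the fixed step size, and the hard-thresholding rule are all illustrative assumptions; the paper's accelerated gradient and progressive screening components are omitted for brevity.

```python
import numpy as np

def sgpca_rank1(X, k, n_iter=200, step=0.1, seed=0):
    """Rank-one sparse generalized PCA sketch for binary data (illustrative).

    Gradient descent on the Bernoulli negative log-likelihood of the
    natural-parameter matrix Theta = u v^T, with hard thresholding that
    keeps only the k largest-magnitude loadings in v. Cells marked
    np.nan are treated as missing and masked out of loss and gradient.
    """
    X = np.asarray(X, dtype=float)
    mask = ~np.isnan(X)                  # observed cells only
    Xf = np.where(mask, X, 0.0)          # fill NaNs so arithmetic stays finite
    n, p = X.shape
    rng = np.random.default_rng(seed)
    u = rng.standard_normal(n)
    v = rng.standard_normal(p)
    for _ in range(n_iter):
        theta = np.clip(np.outer(u, v), -30.0, 30.0)   # numerical safety
        # Bernoulli NLL gradient w.r.t. theta: sigmoid(theta) - x, observed only
        resid = mask * (1.0 / (1.0 + np.exp(-theta)) - Xf)
        u = u - step * (resid @ v)
        v = v - step * (resid.T @ u)
        if k < p:
            # hard thresholding: zero all but the k largest |v_j|
            cutoff = np.partition(np.abs(v), p - k)[p - k]
            v[np.abs(v) < cutoff] = 0.0
        norm_u = np.linalg.norm(u)
        if norm_u > 0:                   # fix the scale of the bilinear form
            u, v = u / norm_u, v * norm_u
    return u, v

# Toy usage: 200 x 50 binary matrix, 10% missing, 5 informative features.
rng = np.random.default_rng(1)
v_true = np.zeros(50)
v_true[:5] = 2.0
theta_true = np.outer(rng.standard_normal(200), v_true)
X = (rng.random((200, 50)) < 1.0 / (1.0 + np.exp(-theta_true))).astype(float)
X[rng.random(X.shape) < 0.1] = np.nan
u_hat, v_hat = sgpca_rank1(X, k=5)
print(np.nonzero(v_hat)[0])   # support should concentrate on the first 5 columns
```

Hard thresholding is used here as a stand-in for the paper's sparsity-inducing regularization because its single parameter k (the number of nonzero loadings) is directly interpretable, which mirrors the abstract's point about intuitive parameter tuning compared with the Lasso.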