A Nonlinear Matrix Decomposition for Mining the Zeros of Sparse Data
L. Saul
SIAM Journal on Mathematics of Data Science, pp. 431-463
Published: 2022-04-07
DOI: 10.1137/21m1405769
Citations: 3
Abstract
We describe a simple iterative solution to a widely recurring problem in multivariate data analysis: given a sparse nonnegative matrix X, how can we estimate a low-rank matrix Θ such that X ≈ f(Θ), where f is an elementwise nonlinearity? We develop a latent variable model for this problem and consider those sparsifying nonlinearities, popular in neural networks, that map all negative values to zero. The model seeks to explain the variability of sparse high-dimensional data in terms of a smaller number of degrees of freedom. We show that exact inference in this model is tractable and derive an expectation-maximization (EM) algorithm to estimate the low-rank matrix Θ. Notably, we do not parameterize Θ as a product of smaller matrices to be alternately optimized; instead, we estimate Θ directly via the singular value decomposition of matrices that are repeatedly inferred (at each iteration of the EM algorithm) from the model's posterior distribution. We use the model to analyze large sparse matrices that arise from data sets of binary, grayscale, and color images. In all of these cases, we find that the model discovers much lower-rank decompositions than purely linear approaches.
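To make the problem statement concrete, the sketch below sets up the decomposition X ≈ f(Θ) with f(z) = max(z, 0) and fits Θ by a simplified alternating-projection heuristic. This is NOT the paper's EM algorithm; it only illustrates the constraint structure the paper exploits: wherever X > 0 we need Θ = X, and wherever X = 0 we need Θ ≤ 0, so a rank-r Θ can be sought by alternating a truncated SVD with enforcement of those constraints. The function name `relu_lowrank_fit` and all parameter choices are illustrative assumptions, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: a true low-rank Theta observed through the ReLU
# nonlinearity f(z) = max(z, 0), yielding a sparse nonnegative X.
m, n, r = 60, 40, 3
Theta_true = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
X = np.maximum(Theta_true, 0.0)

def relu_lowrank_fit(X, rank, n_iters=50):
    """Seek a rank-`rank` matrix L with max(L, 0) ~ X.

    Simplified alternating-projection heuristic (not the paper's EM):
    alternate (1) projection onto rank-`rank` matrices via truncated SVD
    with (2) enforcement of the ReLU consistency constraints
    (Theta = X where X > 0, and Theta <= 0 where X = 0).
    """
    Theta = X.copy()
    for _ in range(n_iters):
        U, s, Vt = np.linalg.svd(Theta, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]        # best rank-r fit
        Theta = np.where(X > 0, X, np.minimum(L, 0.0))  # restore constraints
    return L

Theta_hat = relu_lowrank_fit(X, rank=r)
err = np.linalg.norm(np.maximum(Theta_hat, 0) - X) / np.linalg.norm(X)
print(f"relative reconstruction error: {err:.4f}")
```

Note that a purely linear rank-3 factorization of X itself would generally fail here: the ReLU sets roughly half the entries of Θ_true to zero, which raises the linear rank of X well above 3. Allowing the zeros of X to be explained by *any* nonpositive value of Θ is what lets the nonlinear model recover a much lower-rank description, which is the phenomenon the paper quantifies.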