Forward Operator Estimation in Generative Models with Kernel Transfer Operators
Zhichun Huang, Rudrasis Chakraborty, Vikas Singh
Proceedings of Machine Learning Research, vol. 162, pp. 9148-9172, 2022-07-01
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10150593/pdf/nihms-1894539.pdf
Citations: 0
Abstract
Generative models (e.g., variational autoencoders, flow-based generative models, GANs) usually involve finding a mapping from a known distribution, e.g., a Gaussian, to an estimate of the unknown data-generating distribution. This process is often carried out by searching over a class of non-linear functions (e.g., those representable by a deep neural network). While effective in practice, the associated runtime/memory costs can increase rapidly, and will depend on the performance desired in an application. We propose a much cheaper (and simpler) strategy to estimate this mapping, based on adapting known results on kernel transfer operators. We show that if some compromise in functionality (and scalability) is acceptable, our proposed formulation enables highly efficient distribution approximation and sampling, and offers surprisingly good empirical performance that compares favorably with powerful baselines.
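To make the idea concrete, here is a minimal sketch (not the authors' exact formulation) of mapping a known Gaussian onto a target distribution with a regularized kernel estimate of a transfer operator. It assumes paired source/target samples (obtained here by sorting, which only makes sense in 1-D) and uses a simple linear preimage over the target samples; the kernel bandwidth `gamma` and regularizer `lam` are illustrative choices, not values from the paper.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Gaussian (RBF) kernel matrix between rows of A (n x d) and B (m x d)
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * d2)

def fit_transfer(X, Y, lam=1e-3, gamma=1.0):
    # X: n x d samples from the known (Gaussian) source distribution
    # Y: n x d paired samples from the target data distribution
    # Regularized inverse of the source Gram matrix, shared by all queries
    n = X.shape[0]
    K = rbf_kernel(X, X, gamma)
    alpha = np.linalg.solve(K + lam * n * np.eye(n), np.eye(n))
    return X, Y, alpha, gamma

def push_forward(model, Z):
    # Approximate pushforward of new source samples Z:
    #   y_hat(z) = sum_i beta_i(z) y_i,  beta(z) = (K + lam*n*I)^{-1} k(X, z)
    X, Y, alpha, gamma = model
    beta = alpha @ rbf_kernel(X, Z, gamma)   # n x m coefficient matrix
    return beta.T @ Y                        # m x d approximate target samples

# Toy usage: learn to push N(0, 1) onto N(5, 0.5^2)
rng = np.random.default_rng(0)
X = np.sort(rng.standard_normal((200, 1)), axis=0)           # sorted 1-D pairing
Y = np.sort(5 + 0.5 * rng.standard_normal((200, 1)), axis=0)
model = fit_transfer(X, Y)
samples = push_forward(model, rng.standard_normal((500, 1)))
print(samples.mean(), samples.std())
```

The fit is a one-shot linear solve on an n x n Gram matrix, which is the source of the cost savings the abstract alludes to: no iterative training over network weights, at the price of scalability in the number of anchor samples.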