An introduction to deep generative modeling

GAMM Mitteilungen (Q1, Mathematics) · Pub Date: 2021-05-28 · DOI: 10.1002/gamm.202100008
Lars Ruthotto, Eldad Haber
Citations: 120

Abstract

Deep generative models (DGMs) are neural networks with many hidden layers trained to approximate complicated, high-dimensional probability distributions from samples. When trained successfully, we can use a DGM to estimate the likelihood of each observation and to create new samples from the underlying distribution. Developing DGMs has become one of the most hotly researched fields in artificial intelligence in recent years. The literature on DGMs has become vast and is growing rapidly. Some advances have even reached the public sphere, for example, the recent successes in generating realistic-looking images, voices, or movies: so-called deep fakes. Despite these successes, several mathematical and practical issues limit the broader use of DGMs: given a specific dataset, it remains challenging to design and train a DGM and even more challenging to find out why a particular model is or is not effective. To help advance the theoretical understanding of DGMs, we introduce DGMs and provide a concise mathematical framework for modeling the three most popular approaches: normalizing flows, variational autoencoders, and generative adversarial networks. We illustrate the advantages and disadvantages of these basic approaches using numerical experiments. Our goal is to enable and motivate the reader to contribute to this proliferating research area. Our presentation also emphasizes relations between generative modeling and optimal transport.
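The two capabilities the abstract highlights (exact likelihood estimation and sampling) are easiest to see for a normalizing flow, where the change-of-variables formula makes both explicit. The sketch below is an illustration of that idea only, not code from the paper: it uses a hypothetical one-dimensional affine flow x = a·z + b with a standard-normal latent, the simplest possible instance of the technique.

```python
import numpy as np

def log_prob_standard_normal(z):
    """Log-density of the latent distribution, z ~ N(0, 1)."""
    return -0.5 * (z ** 2 + np.log(2 * np.pi))

def flow_log_likelihood(x, a, b):
    """Exact likelihood via change of variables for the flow f(z) = a*z + b:
    log p(x) = log p_z(f^{-1}(x)) + log |df^{-1}/dx|, with f^{-1}(x) = (x-b)/a."""
    z = (x - b) / a
    return log_prob_standard_normal(z) - np.log(abs(a))

def sample(n, a, b, rng):
    """Generate new samples by pushing latent draws through the flow."""
    return a * rng.standard_normal(n) + b

rng = np.random.default_rng(0)
xs = sample(5, a=2.0, b=1.0, rng=rng)       # new samples from p(x) = N(1, 4)
ll = flow_log_likelihood(xs, a=2.0, b=1.0)  # exact log-likelihood of each sample
```

In a real normalizing flow, `f` is a deep invertible network and the scalar Jacobian `|a|` becomes a log-determinant accumulated over the layers, but the two operations shown here (invert to evaluate likelihood, push forward to sample) are exactly the ones the abstract refers to.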
