Variational inference in neural functional prior using normalizing flows: application to differential equation and operator learning problems

IF 4.5 · CAS Tier 2 (Engineering) · Q1 MATHEMATICS, APPLIED · Applied Mathematics and Mechanics-English Edition · Pub Date: 2023-07-03 · DOI: 10.1007/s10483-023-2997-7
Xuhui Meng
Applied Mathematics and Mechanics-English Edition, 44(7), 1111-1124. Open-access PDF: https://link.springer.com/content/pdf/10.1007/s10483-023-2997-7.pdf
Citations: 2

Abstract

Physics-informed deep learning has recently emerged as an effective tool for leveraging both observational data and available physical laws. Physics-informed neural networks (PINNs) and deep operator networks (DeepONets) are two such models: the former encodes the physical laws via automatic differentiation, while the latter learns the hidden physics from data. Generally, noisy and limited observational data, as well as over-parameterization in neural networks (NNs), result in uncertainty in the predictions of deep learning models. In the paper "MENG, X., YANG, L., MAO, Z., FERRANDIS, J. D., and KARNIADAKIS, G. E. Learning functional priors and posteriors from data and physics. Journal of Computational Physics, 457, 111073 (2022)", a Bayesian framework based on generative adversarial networks (GANs) was proposed as a unified model to quantify the uncertainties in the predictions of both PINNs and DeepONets. Specifically, that approach has two stages: (i) prior learning and (ii) posterior estimation. In the first stage, GANs are used to learn a functional prior either from a prescribed function distribution, e.g., a Gaussian process, or from historical data and available physics. In the second stage, the Hamiltonian Monte Carlo (HMC) method is used to estimate the posterior in the latent space of the GANs. However, vanilla HMC does not support mini-batch training, which limits its application to problems with big data.
In the present work, we propose to use normalizing flow (NF) models in the context of variational inference (VI), which naturally enables mini-batch training, as an alternative to HMC for posterior estimation in the latent space of GANs. A series of numerical experiments, including a nonlinear differential equation problem and a 100-dimensional (100D) Darcy problem, demonstrates that NFs with full-/mini-batch training achieve accuracy similar to that of the "gold standard" HMC. Moreover, the mini-batch training of NFs makes them a promising tool for quantifying uncertainty when solving high-dimensional partial differential equation (PDE) problems with big data.
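The two ingredients the abstract combines can be illustrated concretely: a normalizing flow gives a tractable variational density via the change-of-variables formula, and a mini-batch estimate of the data term is what lets VI scale where vanilla HMC cannot. The sketch below is a toy illustration, not the paper's implementation: the planar-flow parameters, the 2D latent dimension, and the Gaussian likelihood are all illustrative assumptions.

```python
import numpy as np

# Toy sketch (illustrative assumptions throughout, not the paper's code):
#  1) a normalizing flow q(theta) built as a planar transform of a standard
#     Gaussian, with log-density from the change-of-variables formula;
#  2) an unbiased mini-batch estimate of the ELBO's data term.

rng = np.random.default_rng(0)
d = 2  # latent dimension (stands in for the GAN latent space)

# Planar flow parameters (hypothetical values); u is corrected to u_hat so
# that w . u_hat > -1, which guarantees the transform is invertible.
w = np.array([1.0, 0.5]); b = 0.1; u = np.array([0.5, -0.3])
wu = w @ u
u_hat = u + (np.log1p(np.exp(wu)) - 1.0 - wu) * w / (w @ w)

def planar_forward(z0):
    """z = z0 + u_hat * tanh(w.z0 + b); returns z and log|det J| per sample."""
    a = z0 @ w + b
    t = np.tanh(a)
    z = z0 + np.outer(t, u_hat)
    psi = (1.0 - t ** 2)[:, None] * w          # psi(z0) = h'(a) * w
    logdet = np.log(np.abs(1.0 + psi @ u_hat)) # |det dz/dz0|
    return z, logdet

# log q(theta) = log N(z0; 0, I) - log|det J|
m = 256
z0 = rng.standard_normal((m, d))
theta, logdet = planar_forward(z0)
log_q = -0.5 * (z0 ** 2).sum(axis=1) - 0.5 * d * np.log(2 * np.pi) - logdet

# Mini-batch data term: with N observations and a batch of size n_b,
# (N / n_b) * (sum of batch log-likelihoods) is unbiased for the full sum.
N = 10_000
data = rng.normal(loc=1.0, scale=0.5, size=N)
batch = rng.choice(data, size=100, replace=False)

def log_lik(th, x):
    """Toy Gaussian likelihood with known sigma = 0.5 (assumption)."""
    return -0.5 * ((x[None, :] - th[:, :1]) / 0.5) ** 2 \
           - np.log(0.5 * np.sqrt(2 * np.pi))

scaled_data_term = (N / batch.size) * log_lik(theta, batch).sum(axis=1)
# Monte Carlo ELBO estimate (log-prior term omitted in this toy example);
# training would ascend this quantity w.r.t. the flow parameters.
elbo_estimate = (scaled_data_term - log_q).mean()
```

Because each ELBO evaluation touches only a mini-batch, the per-step cost is independent of N; HMC, by contrast, needs the full-data likelihood at every leapfrog step, which is the bottleneck the paper's NF-based VI removes.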

About the journal: Applied Mathematics and Mechanics is the English version of a journal on applied mathematics and mechanics published in the People's Republic of China. Our Editorial Committee, headed by Professor Chien Weizang, Ph.D., President of Shanghai University, consists of scientists in the fields of applied mathematics and mechanics from all over China. Founded by Professor Chien Weizang in 1980, Applied Mathematics and Mechanics became a bimonthly in 1981 and then a monthly in 1985. It is a comprehensive journal presenting original research papers on mechanics, mathematical methods and modeling in mechanics, as well as applied mathematics relevant to modern mechanics.