iPINNs: incremental learning for Physics-informed neural networks

IF 8.7 | Region 2 (Engineering Technology) | Q1 (Mathematics) | Engineering with Computers | Pub Date: 2024-06-22 | DOI: 10.1007/s00366-024-02010-1
Aleksandr Dekhovich, Marcel H. F. Sluiter, David M. J. Tax, Miguel A. Bessa
{"title":"iPINNs:物理信息神经网络的增量学习","authors":"Aleksandr Dekhovich, Marcel H. F. Sluiter, David M. J. Tax, Miguel A. Bessa","doi":"10.1007/s00366-024-02010-1","DOIUrl":null,"url":null,"abstract":"<p>Physics-informed neural networks (PINNs) have recently become a powerful tool for solving partial differential equations (PDEs). However, finding a set of neural network parameters that fulfill a PDE at the boundary and within the domain of interest can be challenging and non-unique due to the complexity of the loss landscape that needs to be traversed. Although a variety of multi-task learning and transfer learning approaches have been proposed to overcome these issues, no incremental training procedure has been proposed for PINNs. As demonstrated herein, by developing incremental PINNs (iPINNs) we can effectively mitigate such training challenges and learn multiple tasks (equations) sequentially without additional parameters for new tasks. Interestingly, we show that this also improves performance for every equation in the sequence. Our approach learns multiple PDEs starting from the simplest one by creating its own subnetwork for each PDE and allowing each subnetwork to overlap with previously learned subnetworks. We demonstrate that previous subnetworks are a good initialization for a new equation if PDEs share similarities. We also show that iPINNs achieve lower prediction error than regular PINNs for two different scenarios: (1) learning a family of equations (e.g., 1-D convection PDE); and (2) learning PDEs resulting from a combination of processes (e.g., 1-D reaction–diffusion PDE). The ability to learn all problems with a single network together with learning more complex PDEs with better generalization than regular PINNs will open new avenues in this field.</p>","PeriodicalId":11696,"journal":{"name":"Engineering with Computers","volume":null,"pages":null},"PeriodicalIF":8.7000,"publicationDate":"2024-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"iPINNs: incremental learning for Physics-informed neural networks\",\"authors\":\"Aleksandr Dekhovich, Marcel H. F. Sluiter, David M. J. Tax, Miguel A. Bessa\",\"doi\":\"10.1007/s00366-024-02010-1\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Physics-informed neural networks (PINNs) have recently become a powerful tool for solving partial differential equations (PDEs). However, finding a set of neural network parameters that fulfill a PDE at the boundary and within the domain of interest can be challenging and non-unique due to the complexity of the loss landscape that needs to be traversed. Although a variety of multi-task learning and transfer learning approaches have been proposed to overcome these issues, no incremental training procedure has been proposed for PINNs. As demonstrated herein, by developing incremental PINNs (iPINNs) we can effectively mitigate such training challenges and learn multiple tasks (equations) sequentially without additional parameters for new tasks. Interestingly, we show that this also improves performance for every equation in the sequence. Our approach learns multiple PDEs starting from the simplest one by creating its own subnetwork for each PDE and allowing each subnetwork to overlap with previously learned subnetworks. We demonstrate that previous subnetworks are a good initialization for a new equation if PDEs share similarities. 
We also show that iPINNs achieve lower prediction error than regular PINNs for two different scenarios: (1) learning a family of equations (e.g., 1-D convection PDE); and (2) learning PDEs resulting from a combination of processes (e.g., 1-D reaction–diffusion PDE). The ability to learn all problems with a single network together with learning more complex PDEs with better generalization than regular PINNs will open new avenues in this field.</p>\",\"PeriodicalId\":11696,\"journal\":{\"name\":\"Engineering with Computers\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":8.7000,\"publicationDate\":\"2024-06-22\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Engineering with Computers\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://doi.org/10.1007/s00366-024-02010-1\",\"RegionNum\":2,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"Mathematics\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Engineering with Computers","FirstCategoryId":"5","ListUrlMain":"https://doi.org/10.1007/s00366-024-02010-1","RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"Mathematics","Score":null,"Total":0}
Citations: 0

Abstract



Physics-informed neural networks (PINNs) have recently become a powerful tool for solving partial differential equations (PDEs). However, finding a set of neural network parameters that fulfill a PDE at the boundary and within the domain of interest can be challenging and non-unique due to the complexity of the loss landscape that needs to be traversed. Although a variety of multi-task learning and transfer learning approaches have been proposed to overcome these issues, no incremental training procedure has been proposed for PINNs. As demonstrated herein, by developing incremental PINNs (iPINNs) we can effectively mitigate such training challenges and learn multiple tasks (equations) sequentially without additional parameters for new tasks. Interestingly, we show that this also improves performance for every equation in the sequence. Our approach learns multiple PDEs starting from the simplest one by creating its own subnetwork for each PDE and allowing each subnetwork to overlap with previously learned subnetworks. We demonstrate that previous subnetworks are a good initialization for a new equation if PDEs share similarities. We also show that iPINNs achieve lower prediction error than regular PINNs for two different scenarios: (1) learning a family of equations (e.g., 1-D convection PDE); and (2) learning PDEs resulting from a combination of processes (e.g., 1-D reaction–diffusion PDE). The ability to learn all problems with a single network together with learning more complex PDEs with better generalization than regular PINNs will open new avenues in this field.

Source journal: Engineering with Computers (category: Engineering, Mechanical)
CiteScore: 16.50
Self-citation rate: 2.30%
Articles published: 203
Review time: 9 months
About the journal: Engineering with Computers is an international journal dedicated to simulation-based engineering. It features original papers and comprehensive reviews on technologies supporting simulation-based engineering, along with demonstrations of operational simulation-based engineering systems. The journal covers various technical areas such as adaptive simulation techniques, engineering databases, CAD geometry integration, mesh generation, parallel simulation methods, simulation frameworks, user interface technologies, and visualization techniques. It also encompasses a wide range of application areas where engineering technologies are applied, spanning from automotive industry applications to medical device design.
Latest articles in this journal:
- A universal material model subroutine for soft matter systems
- A second-generation URANS model (STRUCT-$\epsilon$) applied to a generic side mirror and its impact on sound generation
- Multiphysics discovery with moving boundaries using Ensemble SINDy and peridynamic differential operator
- Adaptive Kriging-based method with learning function allocation scheme and hybrid convergence criterion for efficient structural reliability analysis
- A new kernel-based approach for solving general fractional (integro)-differential-algebraic equations