Physics-Informed Neural Networks with Trust-Region Sequential Quadratic Programming

Xiaoran Cheng, Sen Na
arXiv - MATH - Numerical Analysis · Published 2024-09-16 · DOI: arxiv-2409.10777

Abstract

Physics-Informed Neural Networks (PINNs) represent a significant advancement in Scientific Machine Learning (SciML): they integrate physical domain knowledge into an empirical loss function as soft constraints and apply existing machine learning methods to train the model. However, recent research has noted that PINNs may fail to learn relatively complex Partial Differential Equations (PDEs). This paper addresses the failure modes of PINNs by introducing a novel, hard-constrained deep learning method -- trust-region Sequential Quadratic Programming (trSQP-PINN). In contrast to directly training the penalized soft-constrained loss as in PINNs, our method performs a linear-quadratic approximation of the hard-constrained loss, while leveraging the soft-constrained loss to adaptively adjust the trust-region radius. We only trust our model approximations and make updates within the trust region; this updating scheme can overcome the ill-conditioning issue of PINNs. We also address the computational bottleneck of second-order SQP methods by employing quasi-Newton updates for second-order information, and, importantly, we introduce a simple pretraining step to further enhance the training efficiency of our method. We demonstrate the effectiveness of trSQP-PINN through extensive experiments. Compared to existing hard-constrained methods for PINNs, such as penalty methods and augmented Lagrangian methods, trSQP-PINN significantly improves the accuracy of the learned PDE solutions, achieving up to 1-3 orders of magnitude lower errors. Additionally, our pretraining step is generally effective for other hard-constrained methods, and experiments demonstrate the robustness of our method to both problem-specific parameters and algorithm tuning parameters.
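The abstract gives no implementation details, so the following is only a rough illustration of the general trust-region SQP machinery the method builds on — solving a linear-quadratic (Newton-KKT) subproblem, trusting the step only within a radius adjusted by an ℓ1 "soft-constrained" merit function, and replacing exact second-order information with a safeguarded BFGS (quasi-Newton) update. The toy problem, function names, penalty weight, and tolerances below are ours, not the paper's; a real trSQP-PINN would apply these ideas to network weights with PDE-residual constraints.

```python
import numpy as np

# Toy equality-constrained problem standing in for the hard-constrained loss:
#   minimize f(x)  subject to  c(x) = 0.
def f(x):          # objective (plays the role of the data-fitting loss)
    return x[0] ** 2 + x[1] ** 2

def grad_f(x):
    return 2.0 * x

def c(x):          # equality constraint (plays the role of the PDE residual)
    return np.array([x[0] + x[1] - 1.0])

def jac_c(x):
    return np.array([[1.0, 1.0]])

def trsqp(x, radius=1.0, mu=10.0, tol=1e-8, max_iter=50):
    """Trust-region SQP with a BFGS (quasi-Newton) Hessian approximation."""
    n, m = len(x), len(c(x))
    H = np.eye(n)                  # quasi-Newton model of the Lagrangian Hessian
    lam = np.zeros(m)              # multiplier estimate

    def merit(z):                  # soft-constrained (l1-penalized) loss
        return f(z) + mu * np.abs(c(z)).sum()

    for _ in range(max_iter):
        g, A, cv = grad_f(x), jac_c(x), c(x)
        # Linear-quadratic subproblem: Newton-KKT system for step + multipliers.
        K = np.block([[H, A.T], [A, np.zeros((m, m))]])
        sol = np.linalg.solve(K, np.concatenate([-g - A.T @ lam, -cv]))
        p, dlam = sol[:n], sol[n:]
        if np.linalg.norm(p) < tol:
            break
        if np.linalg.norm(p) > radius:     # trust the model only locally
            p *= radius / np.linalg.norm(p)
        # Actual vs. predicted reduction in the merit function.
        pred = -(g @ p + 0.5 * p @ H @ p) + mu * (
            np.abs(cv).sum() - np.abs(cv + A @ p).sum())
        ared = merit(x) - merit(x + p)
        rho = ared / pred if pred > 0 else 0.0
        if rho > 0.1:                      # accept the step
            y = (grad_f(x + p) + jac_c(x + p).T @ (lam + dlam)) - (g + A.T @ lam)
            if p @ y > 1e-12:              # safeguarded BFGS update
                H += np.outer(y, y) / (y @ p) \
                     - (H @ np.outer(p, p) @ H) / (p @ H @ p)
            x, lam = x + p, lam + dlam
            if rho > 0.75:
                radius = min(2.0 * radius, 10.0)
        else:
            radius *= 0.5                  # shrink the trust region and retry
    return x

x_star = trsqp(np.array([2.0, -1.0]))      # -> approximately (0.5, 0.5)
```

On this convex toy problem the iteration lands on the constrained minimizer (0.5, 0.5) in a few steps; the key design point mirrored from the abstract is that the penalized loss is used only to judge and resize the trust region, while the update itself comes from the hard-constrained KKT subproblem.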