Preconditioned FEM-based neural networks for solving incompressible fluid flows and related inverse problems

IF 2.6 · Q1 (Mathematics, Applied) · Journal of Computational and Applied Mathematics · Pub Date: 2025-04-05 · DOI: 10.1016/j.cam.2025.116663
Franziska Griese, Fabian Hoppe, Alexander Rüttgers, Philipp Knechtges
Journal of Computational and Applied Mathematics, Volume 469, Article 116663. Available at: https://www.sciencedirect.com/science/article/pii/S0377042725001773. Citations: 0.

Abstract

The numerical simulation and optimization of technical systems described by partial differential equations are expensive, especially in multi-query scenarios in which the underlying equations have to be solved for different parameters. A comparatively new approach in this context is to combine the good approximation properties of neural networks (for the parameter dependence) with the classical finite element method (for the discretization). However, instead of considering the solution mapping of the PDE from the parameter space into the FEM-discretized solution space as a purely data-driven regression problem, so-called physically informed regression problems have proven to be useful. In these, the equation residual is minimized during the training of the neural network, i.e., the neural network "learns" the physics underlying the problem. In this paper, we extend this approach to saddle-point and non-linear fluid dynamics problems, namely the stationary Stokes and stationary Navier–Stokes equations, respectively. In particular, we propose a modification of the existing approach: instead of minimizing the plain equation residual during training, we minimize the equation residual modified by a preconditioner. By analogy with the linear case, this also improves the conditioning in the present non-linear case. Our numerical examples demonstrate that this approach significantly reduces the training effort and greatly increases accuracy and generalizability. Finally, we show the application of the resulting parameterized model to a related inverse problem.
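The core idea of the abstract can be illustrated with a minimal toy sketch (hypothetical, not the authors' code): a parameterized model u_θ(μ) is trained by minimizing the preconditioned FEM residual ‖P⁻¹(A u_θ(μ) − f(μ))‖² rather than the plain residual. For transparency, the "network" here is a single linear layer and P⁻¹ is taken as the exact inverse of A; a real application would use a neural network and a cheap approximate preconditioner (Jacobi, ILU, multigrid).

```python
import numpy as np

n = 20
# 1D Laplacian stiffness matrix: ill-conditioned, as typical FEM systems are
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

def f(mu):
    """Hypothetical parameter-dependent load vector."""
    return mu * np.ones(n)

def train(P_inv, steps, lr):
    """Gradient descent on the preconditioned residual loss."""
    w = np.zeros(n)                    # linear ansatz u_theta(mu) = mu * w
    mus = np.linspace(0.5, 2.0, 8)     # training parameters
    for _ in range(steps):
        grad = np.zeros(n)
        for mu in mus:
            r = P_inv @ (A @ (mu * w) - f(mu))   # preconditioned residual
            grad += 2 * mu * A.T @ P_inv.T @ r   # exact gradient of ||r||^2
        w -= lr * grad / len(mus)
    return w

# With the (ideal) preconditioner the loss is perfectly conditioned and
# plain gradient descent converges in a handful of steps.
w = train(np.linalg.inv(A), steps=100, lr=0.2)

mu_test = 1.3                          # unseen parameter
u_true = np.linalg.solve(A, f(mu_test))
rel_err = np.linalg.norm(mu_test * w - u_true) / np.linalg.norm(u_true)
print(rel_err < 1e-6)
```

With the identity preconditioner instead, the same loop inherits the squared condition number of A and would need orders of magnitude more iterations, which mirrors the training-effort reduction the abstract claims.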
Source journal: Journal of Computational and Applied Mathematics
CiteScore: 5.40 · Self-citation rate: 4.20% · Articles per year: 437 · Review time: 3.0 months
Journal overview: The Journal of Computational and Applied Mathematics publishes original papers of high scientific value in all areas of computational and applied mathematics. The main interest of the Journal is in papers that describe and analyze new computational techniques for solving scientific or engineering problems. Improved analysis, including the effectiveness and applicability of existing methods and algorithms, is also of importance. The computational efficiency (e.g. the convergence, stability, accuracy, ...) should be proved and illustrated by nontrivial numerical examples. Papers describing only variants of existing methods, without adding significant new computational properties, are not of interest. The audience consists of applied mathematicians, numerical analysts, computational scientists and engineers.
Latest articles in this journal:
New relaxation modulus-based iterative method for large and sparse implicit complementarity problem
Enhancing efficiency of proximal gradient method with predicted and corrected step sizes
Optimal alignment of Lorentz orientation and generalization to matrix Lie groups
A novel twin extreme learning machine for regression problems
The alternating Halpern-Mann iteration for families of maps