Learning to solve PDEs with finite volume-informed neural networks in a data-free approach
Tianyu Li, Yiye Zou, Shufan Zou, Xinghua Chang, Laiping Zhang, Xiaogang Deng
Journal of Computational Physics, Volume 530, Article 113919 (published 2025-06-01; online 2025-03-10)
DOI: 10.1016/j.jcp.2025.113919
URL: https://www.sciencedirect.com/science/article/pii/S0021999125002025
Impact Factor: 3.8 · JCR Q2 (Computer Science, Interdisciplinary Applications) · CAS Region 2 (Physics and Astronomy)
Citations: 0
Abstract
Partial differential equations (PDEs) play a crucial role in scientific computing. Recent advancements in deep learning have led to the development of both data-driven and Physics-Informed Neural Networks (PINNs) for efficiently solving PDEs, though challenges remain in data acquisition and generalization for both approaches. This paper presents a computational framework that combines the Finite Volume Method (FVM) with Graph Neural Networks (GNNs) to construct the PDE loss, enabling direct parametric PDE solving during training without the need for precomputed data. By exploiting GNNs' flexibility on unstructured grids, this framework extends its applicability across various geometries, physical equations, and boundary conditions. The core innovation lies in an unsupervised training algorithm that utilizes GPU parallel computing to create a fully differentiable finite volume discretization process, including steps such as gradient reconstruction and surface integration. Our results demonstrate that the trained GNN model can efficiently solve multiple PDEs with varying boundary conditions and source terms in a single training session, with the number of iterations required to reach a steady-state solution during the inference stage being around 25% of that required by traditional second-order CFD solvers. The implementation code of this paper is available on GitHub at https://github.com/Litianyu141/Gen-FVGN-steady.
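To illustrate the kind of finite-volume operation the abstract says is made differentiable, the sketch below implustrates Green-Gauss gradient reconstruction on a single cell: the cell gradient is approximated as the surface integral of face values times outward face normals, divided by the cell volume. This is a minimal, hedged example with an illustrative mesh, not the paper's implementation (which performs these operations as differentiable GPU tensor ops over a graph of cells and faces); all names here are assumptions.

```python
import numpy as np

def green_gauss_gradient(phi_face, normals, areas, volume):
    """Green-Gauss reconstruction: grad(phi) ~= (1/V) * sum_f phi_f * n_f * A_f.

    phi_face: (F,) face-center values of the field
    normals:  (F, d) unit outward normals of each face
    areas:    (F,) face areas (edge lengths in 2D)
    volume:   scalar cell volume (area in 2D)
    """
    return (phi_face[:, None] * normals * areas[:, None]).sum(axis=0) / volume

# Illustrative mesh: one unit-square cell with four edge faces.
midpoints = np.array([[0.5, 0.0], [1.0, 0.5], [0.5, 1.0], [0.0, 0.5]])
normals = np.array([[0.0, -1.0], [1.0, 0.0], [0.0, 1.0], [-1.0, 0.0]])
areas = np.ones(4)  # each edge has length 1

# Linear test field phi = 2x + 3y, sampled at face midpoints.
phi_face = 2.0 * midpoints[:, 0] + 3.0 * midpoints[:, 1]

grad = green_gauss_gradient(phi_face, normals, areas, volume=1.0)
print(grad)  # recovers the exact gradient [2. 3.] for a linear field
```

In a framework like the one described, this same sum-over-faces would be expressed with differentiable scatter/gather operations over batched unstructured meshes, so the PDE residual built from the reconstructed gradients can be backpropagated through during training.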
About the Journal
Journal of Computational Physics thoroughly treats the computational aspects of physical problems, presenting techniques for the numerical solution of mathematical equations arising in all areas of physics. The journal seeks to emphasize methods that cross disciplinary boundaries.
The Journal of Computational Physics also publishes short notes of 4 pages or less (including figures, tables, and references but excluding title pages). Letters to the Editor commenting on articles already published in this Journal will also be considered. Neither notes nor letters should have an abstract.