Interpretable A-posteriori error indication for graph neural network surrogate models

Journal: Computer Methods in Applied Mechanics and Engineering (Impact Factor 6.9; JCR Q1, Engineering, Multidisciplinary; CAS Zone 1, Engineering & Technology)
Publication date: 2024-11-15
DOI: 10.1016/j.cma.2024.117509
Shivam Barwey, Hojin Kim, Romit Maulik
{"title":"Interpretable A-posteriori error indication for graph neural network surrogate models","authors":"Shivam Barwey ,&nbsp;Hojin Kim ,&nbsp;Romit Maulik","doi":"10.1016/j.cma.2024.117509","DOIUrl":null,"url":null,"abstract":"<div><div>Data-driven surrogate modeling has surged in capability in recent years with the emergence of graph neural networks (GNNs), which can operate directly on mesh-based representations of data. The goal of this work is to introduce an interpretability enhancement procedure for GNNs, with application to unstructured mesh-based fluid dynamics modeling. Given a black-box baseline GNN model, the end result is an interpretable GNN model that isolates regions in physical space, corresponding to sub-graphs, that are intrinsically linked to the forecasting task while retaining the predictive capability of the baseline. These structures identified by the interpretable GNNs are adaptively produced in the forward pass and serve as explainable links between the baseline model architecture, the optimization goal, and known problem-specific physics. Additionally, through a regularization procedure, the interpretable GNNs can also be used to identify, during inference, graph nodes that correspond to a majority of the anticipated forecasting error, adding a novel interpretable error-tagging capability to baseline models. Demonstrations are performed using unstructured flow field data sourced from flow over a backward-facing step at high Reynolds numbers, with geometry extrapolations demonstrated for ramp and wall-mounted cube configurations.</div></div>","PeriodicalId":55222,"journal":{"name":"Computer Methods in Applied Mechanics and Engineering","volume":"433 ","pages":"Article 117509"},"PeriodicalIF":6.9000,"publicationDate":"2024-11-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computer Methods in Applied Mechanics and Engineering","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0045782524007631","RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, MULTIDISCIPLINARY","Score":null,"Total":0}
Citations: 0

Abstract

Data-driven surrogate modeling has surged in capability in recent years with the emergence of graph neural networks (GNNs), which can operate directly on mesh-based representations of data. The goal of this work is to introduce an interpretability enhancement procedure for GNNs, with application to unstructured mesh-based fluid dynamics modeling. Given a black-box baseline GNN model, the end result is an interpretable GNN model that isolates regions in physical space, corresponding to sub-graphs, that are intrinsically linked to the forecasting task while retaining the predictive capability of the baseline. These structures identified by the interpretable GNNs are adaptively produced in the forward pass and serve as explainable links between the baseline model architecture, the optimization goal, and known problem-specific physics. Additionally, through a regularization procedure, the interpretable GNNs can also be used to identify, during inference, graph nodes that correspond to a majority of the anticipated forecasting error, adding a novel interpretable error-tagging capability to baseline models. Demonstrations are performed using unstructured flow field data sourced from flow over a backward-facing step at high Reynolds numbers, with geometry extrapolations demonstrated for ramp and wall-mounted cube configurations.
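To make the two mechanisms described above more concrete, the adaptive sub-graph identification performed in the forward pass and the error-tagging regularization, the sketch below shows a minimal, self-contained version in plain PyTorch. It is an illustration only, not the paper's implementation: the names (MPLayer, InterpretableGNN, error_tagging_loss), the hard top-k gating on learned node scores, and the score-to-error alignment term are assumptions made here to convey the idea.

```python
# Illustrative sketch only (not the authors' implementation): a baseline-style
# message-passing model plus a learned top-k node mask, trained so that the
# retained sub-graph concentrates on nodes carrying large prediction error.
# All class/function names here are hypothetical.
import torch
import torch.nn as nn


class MPLayer(nn.Module):
    """One round of mean-aggregation message passing over an edge list."""

    def __init__(self, dim):
        super().__init__()
        self.msg = nn.Linear(2 * dim, dim)
        self.upd = nn.Linear(2 * dim, dim)

    def forward(self, x, edge_index):
        src, dst = edge_index                                   # each of shape [E]
        m = self.msg(torch.cat([x[src], x[dst]], dim=-1))       # per-edge messages
        agg = torch.zeros_like(x).index_add_(0, dst, m)         # sum messages per node
        deg = torch.zeros(x.size(0), 1, device=x.device).index_add_(
            0, dst, torch.ones(dst.size(0), 1, device=x.device)
        ).clamp(min=1.0)
        return torch.relu(self.upd(torch.cat([x, agg / deg], dim=-1)))


class InterpretableGNN(nn.Module):
    """GNN with a soft top-k node mask produced adaptively in the forward pass."""

    def __init__(self, in_dim, hidden, out_dim, keep_ratio=0.25):
        super().__init__()
        self.enc = nn.Linear(in_dim, hidden)
        self.mp1 = MPLayer(hidden)
        self.mp2 = MPLayer(hidden)
        self.score = nn.Linear(hidden, 1)                       # per-node importance score
        self.dec = nn.Linear(hidden, out_dim)
        self.keep_ratio = keep_ratio

    def forward(self, x, edge_index):
        h = torch.relu(self.enc(x))
        h = self.mp1(h, edge_index)
        s = torch.sigmoid(self.score(h)).squeeze(-1)            # node scores in (0, 1)
        k = max(1, int(self.keep_ratio * h.size(0)))
        mask = torch.zeros_like(s)
        mask[torch.topk(s, k).indices] = 1.0                    # hard sub-graph tag
        h = h * (s * mask).unsqueeze(-1)                        # gating keeps gradients on s
        h = self.mp2(h, edge_index)
        return self.dec(h), mask, s                             # prediction + interpretable mask


def error_tagging_loss(pred, target, scores, alpha=0.1):
    """MSE plus a regularizer aligning node scores with per-node error magnitude."""
    node_err = (pred - target).pow(2).mean(dim=-1)              # [N] per-node squared error
    err_weight = node_err / (node_err.sum() + 1e-12)            # normalized error distribution
    # Push scores toward 1 where the error mass sits (cross-entropy-style alignment).
    align = -(err_weight * torch.log(scores + 1e-12)).sum()
    return node_err.mean() + alpha * align


if __name__ == "__main__":
    N, E, F_in, F_out = 200, 800, 4, 4
    x = torch.randn(N, F_in)                                    # node features on the mesh graph
    edge_index = torch.randint(0, N, (2, E))                    # random edge list for the demo
    target = torch.randn(N, F_out)

    model = InterpretableGNN(F_in, 64, F_out)
    pred, mask, scores = model(x, edge_index)
    loss = error_tagging_loss(pred, target, scores)
    loss.backward()
    print(f"loss={loss.item():.4f}, tagged nodes={int(mask.sum())}")
```

In this reading, the returned mask is the interpretable sub-graph tag (the nodes retained in the forward pass), while the alignment term drives the learned scores toward nodes that carry most of the per-node prediction error, mirroring the error-tagging role described in the abstract.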
Journal metrics
CiteScore: 12.70
Self-citation rate: 15.30%
Articles published: 719
Time to review: 44 days
Journal description
Computer Methods in Applied Mechanics and Engineering stands as a cornerstone in the realm of computational science and engineering. With a history spanning over five decades, the journal has been a key platform for disseminating papers on advanced mathematical modeling and numerical solutions. Interdisciplinary in nature, these contributions encompass mechanics, mathematics, computer science, and various scientific disciplines. The journal welcomes a broad range of computational methods addressing the simulation, analysis, and design of complex physical problems, making it a vital resource for researchers in the field.
Latest articles in this journal
- Editorial Board
- Fatigue-constrained topology optimization method for orthotropic materials based on an expanded Tsai-Hill criterion
- Modelling of thermo-mechanical coupling effects in rock masses using an enriched nodal-based continuous-discontinuous deformation analysis method
- Interpretable A-posteriori error indication for graph neural network surrogate models
- Adaptive parameter selection in nudging based data assimilation