On latent dynamics learning in nonlinear reduced order modeling

IF 6.3 | CAS Tier 1, Computer Science | JCR Q1, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Neural Networks | Pub Date: 2025-05-01 (Epub: 2025-01-17) | DOI: 10.1016/j.neunet.2025.107146
Nicola Farenga, Stefania Fresca, Simone Brivio, Andrea Manzoni
Neural Networks, Volume 185, Article 107146. Full text: https://www.sciencedirect.com/science/article/pii/S0893608025000255
Citations: 0

Abstract

In this work, we present the novel mathematical framework of latent dynamics models (LDMs) for reduced order modeling of parameterized nonlinear time-dependent PDEs. Our framework casts the latter task as a nonlinear dimensionality reduction problem, while constraining the latent state to evolve according to an unknown dynamical system. A time-continuous setting is employed to derive error and stability estimates for the LDM approximation of the full order model (FOM) solution. We analyze the impact of using an explicit Runge–Kutta scheme in the time-discrete setting, resulting in the ΔLDM formulation, and further explore the learnable setting, ΔLDMθ, where deep neural networks approximate the discrete LDM components while providing a bounded approximation error with respect to the FOM. Moreover, we extend the concept of parameterized Neural ODEs (a possible way to build data-driven dynamical systems with varying input parameters) to a convolutional architecture, in which the input-parameter information is injected by means of an affine modulation mechanism, and we design a convolutional autoencoder neural network able to retain spatial coherence, thus enhancing interpretability at the latent level. Numerical experiments, including the Burgers' and the advection–diffusion–reaction equations, demonstrate the framework's ability to obtain a time-continuous approximation of the FOM solution, making it possible to query the LDM approximation at any given time instant while retaining a prescribed level of accuracy. Our findings highlight the remarkable potential of the proposed LDMs, which represent a mathematically rigorous framework for enhancing the accuracy and approximation capabilities of reduced order modeling for time-dependent parameterized PDEs.
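As a hedged sketch of the setting the abstract describes (the notation below is assumed for illustration and is not taken from the paper's full text), an LDM couples a nonlinear dimensionality reduction with a latent ODE:

```latex
% Encoder \Psi' and decoder \Psi map the FOM state u_h(t;\mu) \in \mathbb{R}^{N_h}
% to a latent state z(t;\mu) \in \mathbb{R}^{n}, with n \ll N_h:
z(t;\mu) = \Psi'\big(u_h(t;\mu)\big), \qquad
\tilde u_h(t;\mu) = \Psi\big(z(t;\mu)\big).
% The latent state is constrained to evolve under an (unknown) parameterized
% dynamical system, whose components are learned in the \Delta\mathrm{LDM}_\theta setting:
\dot z(t;\mu) = f\big(z(t;\mu), t; \mu\big), \qquad
z(0;\mu) = \Psi'\big(u_h(0;\mu)\big).
```

Under this reading, the error and stability estimates mentioned in the abstract would bound the discrepancy between u_h and the reconstruction Ψ(z).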
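The time-discrete ΔLDM formulation relies on an explicit Runge–Kutta scheme for the latent state. The following is a minimal, illustrative sketch (not the authors' implementation): one classical RK4 step and a rollout of a hypothetical latent vector field `f(z, t, mu)`, where the toy dynamics and parameter `mu` are assumptions made here for demonstration.

```python
def rk4_step(f, z, t, dt, mu):
    """Advance the latent state z by one explicit RK4 step of z' = f(z, t; mu)."""
    k1 = f(z, t, mu)
    k2 = f([zi + 0.5 * dt * k for zi, k in zip(z, k1)], t + 0.5 * dt, mu)
    k3 = f([zi + 0.5 * dt * k for zi, k in zip(z, k2)], t + 0.5 * dt, mu)
    k4 = f([zi + dt * k for zi, k in zip(z, k3)], t + dt, mu)
    return [zi + dt / 6.0 * (a + 2 * b + 2 * c + d)
            for zi, a, b, c, d in zip(z, k1, k2, k3, k4)]

def rollout(f, z0, t0, dt, n_steps, mu):
    """Roll out the latent trajectory; a finer dt refines the time-continuous query."""
    z, t, traj = list(z0), t0, [list(z0)]
    for _ in range(n_steps):
        z = rk4_step(f, z, t, dt, mu)
        t += dt
        traj.append(list(z))
    return traj

# Toy linear latent dynamics z' = -mu * z (exponential decay), used only to
# exercise the integrator; the learned vector field would replace it.
decay = lambda z, t, mu: [-mu * zi for zi in z]
traj = rollout(decay, [1.0], 0.0, 0.01, 100, mu=2.0)
# At t = 1 the exact solution is exp(-2) ≈ 0.1353; RK4 matches it closely.
```

In the learnable setting, `decay` would be replaced by a parameterized neural network (e.g., the convolutional, affinely modulated Neural ODE the abstract describes), with the encoder/decoder supplying z0 and the reconstruction.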


Source journal: Neural Networks (Engineering & Technology; Computer Science: Artificial Intelligence)
CiteScore: 13.90
Self-citation rate: 7.70%
Annual articles: 425
Review time: 67 days
About the journal: Neural Networks is a platform that aims to foster an international community of scholars and practitioners interested in neural networks, deep learning, and other approaches to artificial intelligence and machine learning. Our journal invites submissions covering various aspects of neural networks research, from computational neuroscience and cognitive modeling to mathematical analyses and engineering applications. By providing a forum for interdisciplinary discussions between biology and technology, we aim to encourage the development of biologically-inspired artificial intelligence.