Tensor neural networks for high-dimensional Fokker–Planck equations

Taorui Wang, Zheyuan Hu, Kenji Kawaguchi, Zhongqiang Zhang, George Em Karniadakis

Neural Networks, Volume 185, Article 107165 (2025). DOI: 10.1016/j.neunet.2025.107165
https://www.sciencedirect.com/science/article/pii/S0893608025000449
Abstract
We solve high-dimensional steady-state Fokker–Planck equations on the whole space by applying tensor neural networks. The tensor networks are either a linear combination of tensor products of one-dimensional feedforward networks or a linear combination of several selected radial basis functions. Tensor feedforward networks let us efficiently exploit automatic differentiation (in the physical variables) in major Python packages, while radial basis functions avoid automatic differentiation entirely, since it is rather expensive in high dimensions. We then use physics-informed neural networks and stochastic gradient descent methods to learn the tensor networks. One essential step is to determine a proper bounded domain, or numerical support, for the Fokker–Planck equation. To better train the tensor radial basis function networks, we impose constraints on the parameters, which leads to relatively high accuracy. We demonstrate numerically that tensor neural networks in physics-informed machine learning are efficient for steady-state Fokker–Planck equations in two to ten dimensions.
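To make the construction concrete, below is a minimal PyTorch sketch (not the authors' code) of a rank-R tensor network p(x) ≈ Σ_r Π_d φ_{r,d}(x_d), where each φ_{r,d} is a small one-dimensional feedforward network, trained on the PINN residual of a steady-state Fokker–Planck equation. The Ornstein–Uhlenbeck drift μ(x) = −x, the constant diffusion σ, the rank, the truncated box [−L, L]^D, and the anchoring constraint p(0) = 1 are all illustrative assumptions, not choices taken from the paper.

```python
# Minimal sketch of a tensor neural network trained with a PINN loss for a
# steady-state Fokker-Planck equation. All hyperparameters are illustrative.
import torch
import torch.nn as nn

D, R, L = 4, 8, 4.0  # dimension, tensor rank, half-width of the truncated box


class TensorNN(nn.Module):
    def __init__(self, dim=D, rank=R, width=32):
        super().__init__()
        self.rank = rank
        # One 1D feedforward net per coordinate; each maps a scalar x_d to
        # `rank` scalar factors phi_{r,d}(x_d).
        self.factors = nn.ModuleList(
            nn.Sequential(nn.Linear(1, width), nn.Tanh(),
                          nn.Linear(width, rank))
            for _ in range(dim)
        )

    def forward(self, x):                              # x: (N, dim)
        prod = torch.ones(x.shape[0], self.rank, device=x.device)
        for d, net in enumerate(self.factors):
            prod = prod * net(x[:, d:d + 1])           # (N, rank)
        # Sum over ranks; the coefficients c_r of the linear combination are
        # absorbed into the last linear layers.
        return prod.sum(dim=1)


def fp_residual(model, x, sigma=1.0):
    """Residual of 0 = -div(mu * p) + (sigma^2 / 2) * Laplacian(p),
    with the illustrative Ornstein-Uhlenbeck drift mu(x) = -x."""
    x = x.clone().requires_grad_(True)
    p = model(x)
    grad_p = torch.autograd.grad(p.sum(), x, create_graph=True)[0]  # (N, D)
    # For mu(x) = -x:  div(mu * p) = -x . grad(p) - D * p
    div_mu_p = -(x * grad_p).sum(dim=1) - x.shape[1] * p
    lap = torch.zeros_like(p)
    for d in range(x.shape[1]):  # Laplacian, one second derivative at a time
        lap = lap + torch.autograd.grad(
            grad_p[:, d].sum(), x, create_graph=True)[0][:, d]
    return -div_mu_p + 0.5 * sigma ** 2 * lap


model = TensorNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)  # an SGD-type optimizer
for step in range(2000):
    # Random collocation points in the box [-L, L]^D; choosing this numerical
    # support is the "bounded domain" step mentioned in the abstract.
    x = (torch.rand(256, D) * 2.0 - 1.0) * L
    loss = fp_residual(model, x).pow(2).mean()
    # Pin p(0) = 1 to rule out the trivial zero solution (an illustrative
    # stand-in for the paper's normalization and parameter constraints).
    loss = loss + (model(torch.zeros(1, D)) - 1.0).pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In the radial-basis-function variant mentioned in the abstract, each factor would be a fixed-form basis function (e.g., a Gaussian) whose first and second derivatives are available in closed form, so the two autograd calls in fp_residual could be replaced by analytic expressions, avoiding automatic differentiation altogether.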
Journal Introduction:
Neural Networks is a platform that aims to foster an international community of scholars and practitioners interested in neural networks, deep learning, and other approaches to artificial intelligence and machine learning. Our journal invites submissions covering all aspects of neural network research, from computational neuroscience and cognitive modeling to mathematical analyses and engineering applications. By providing a forum for interdisciplinary discussions between biology and technology, we aim to encourage the development of biologically inspired artificial intelligence.