On a computational paradigm for a class of fractional order direct and inverse problems in terms of physics-informed neural networks with the attention mechanism
M. Srati, A. Oulmelk, L. Afraites, A. Hadri, M.A. Zaky, A.S. Hendy
{"title":"On a computational paradigm for a class of fractional order direct and inverse problems in terms of physics-informed neural networks with the attention mechanism","authors":"M. Srati , A. Oulmelk , L. Afraites , A. Hadri , M.A. Zaky , A.S. Hendy","doi":"10.1016/j.jocs.2024.102514","DOIUrl":null,"url":null,"abstract":"<div><div>Physics-Informed Neural Networks (PINNs) have recently gained significant attention for their ability to solve both forward and inverse problems associated with linear and nonlinear fractional partial differential equations (PDEs). However, PINNs, relying on feedforward neural networks (FNNs), overlook the crucial temporal dependencies inherent in practical physics systems. As a result, they fail to globally propagate the initial condition constraints and accurately capture the true solutions under various scenarios. In contrast, the attention mechanism offers flexible means to implicitly exploit patterns within inputs and, moreover, establish relationships between arbitrary query locations and inputs. Thus, we present an attention-based framework for PINNs, which we term PINNs-Transformer (Zhao et al., 2023). The framework was constructed using self-attention and a set of point-wise multilayer perceptrons (MLPs). The novelty is in applying the framework to the various fractional differential equations with stiff dynamics as well as their inverse formulations. We have also validated the PINNs-Transformer on two examples: one involving a fractional diffusion differential equation over time, and the other focused on identifying a space-dependent parameter associated with the direct problem described in the first example. We reinforce this finding by conducting a numerical comparison with variant of PINN methods based on criteria such as relative error, complexity, memory needs and execution time.</div></div>","PeriodicalId":48907,"journal":{"name":"Journal of Computational Science","volume":"85 ","pages":"Article 102514"},"PeriodicalIF":3.1000,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Computational Science","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1877750324003077","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS","Score":null,"Total":0}
Citations: 0
Abstract
Physics-Informed Neural Networks (PINNs) have recently gained significant attention for their ability to solve both forward and inverse problems associated with linear and nonlinear fractional partial differential equations (PDEs). However, PINNs built on feedforward neural networks (FNNs) overlook the crucial temporal dependencies inherent in practical physical systems. As a result, they fail to propagate initial-condition constraints globally and to accurately capture the true solutions under various scenarios. In contrast, the attention mechanism offers a flexible means of implicitly exploiting patterns within the inputs and, moreover, of establishing relationships between arbitrary query locations and the inputs. We therefore present an attention-based framework for PINNs, which we term PINNs-Transformer (Zhao et al., 2023). The framework is built from self-attention layers and a set of point-wise multilayer perceptrons (MLPs). The novelty lies in applying this framework to various fractional differential equations with stiff dynamics, as well as to their inverse formulations. We validate the PINNs-Transformer on two examples: one involving a time-fractional diffusion equation, and the other identifying a space-dependent parameter associated with the direct problem of the first example. We reinforce these findings with a numerical comparison against variants of PINN methods, using criteria such as relative error, complexity, memory requirements, and execution time.
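Neither the full architecture nor the training details appear in the abstract, so the sketch below is only a minimal illustration, written in PyTorch, of the two ingredients the abstract names: a Transformer-style encoder block (self-attention followed by a point-wise MLP) and the classical L1 discretization of the Caputo time-fractional derivative, which is one standard way to form a time-fractional PINN residual: D_t^α u(t_n) ≈ Δt^(−α)/Γ(2−α) · Σ_{k=0}^{n−1} b_k [u(t_{n−k}) − u(t_{n−k−1})] with b_k = (k+1)^(1−α) − k^(1−α). All names and hyperparameters here (EncoderBlock, AttentionPINN, caputo_l1, d_model, and so on) are illustrative assumptions, not taken from the paper.

```python
import math
import torch
import torch.nn as nn

# Illustrative sketch only -- not the authors' implementation.

class EncoderBlock(nn.Module):
    """Self-attention over a sequence of query points, then a point-wise MLP."""
    def __init__(self, d_model=64, n_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.Tanh(),
                                 nn.Linear(4 * d_model, d_model))
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, z):
        h, _ = self.attn(z, z, z)        # relate every query point to all others
        z = self.norm1(z + h)            # residual connection + layer norm
        return self.norm2(z + self.mlp(z))

class AttentionPINN(nn.Module):
    """Maps a sequence of (x, t) collocation points to the solution u(x, t)."""
    def __init__(self, d_model=64, n_blocks=2):
        super().__init__()
        self.embed = nn.Linear(2, d_model)    # lift (x, t) into feature space
        self.blocks = nn.ModuleList([EncoderBlock(d_model) for _ in range(n_blocks)])
        self.decode = nn.Linear(d_model, 1)   # project features back to u

    def forward(self, xt):                    # xt: (batch, seq_len, 2)
        z = torch.tanh(self.embed(xt))
        for blk in self.blocks:
            z = blk(z)
        return self.decode(z)                 # (batch, seq_len, 1)

def caputo_l1(u_hist, dt, alpha):
    """L1 approximation of the Caputo derivative D_t^alpha u at t_n, 0 < alpha < 1.

    u_hist holds u(t_0), ..., u(t_n) on a uniform grid of step dt in its last axis.
    """
    n = u_hist.shape[-1] - 1
    k = torch.arange(n, dtype=u_hist.dtype)
    b = (k + 1) ** (1 - alpha) - k ** (1 - alpha)   # standard L1 weights
    du = u_hist[..., 1:] - u_hist[..., :-1]         # increments u_{j+1} - u_j
    # weight b_k multiplies the increment over [t_{n-k-1}, t_{n-k}]
    return (b.flip(0) * du).sum(-1) * dt ** (-alpha) / math.gamma(2 - alpha)
```

For a forward problem of the form D_t^α u = u_xx + f, the residual at (x, t_n) would combine caputo_l1 applied to the network outputs along the time history {(x, t_0), ..., (x, t_n)} with u_xx obtained by automatic differentiation; in the inverse setting, the unknown space-dependent coefficient can itself be parameterized by a second small network and trained jointly against measured data.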
Journal overview:
Computational Science is a rapidly growing multi- and interdisciplinary field that uses advanced computing and data analysis to understand and solve complex problems. It has reached a level of predictive capability that now firmly complements the traditional pillars of experimentation and theory.
Recent advances in experimental techniques, such as detectors, on-line sensor networks, and high-resolution imaging, have opened new windows into physical and biological processes at many levels of detail. The resulting data explosion allows for detailed data-driven modeling and simulation.
This new discipline in science combines computational thinking, modern computational methods, devices and collateral technologies to address problems far beyond the scope of traditional numerical methods.
Computational science typically unifies three distinct elements:
• Modeling, Algorithms and Simulations (e.g., numerical and non-numerical, discrete and continuous);
• Software developed to solve problems in science (e.g., biological, physical, and social), engineering, medicine, and the humanities;
• Computer and information science that develops and optimizes the advanced system hardware, software, networking, and data-management components (e.g., problem-solving environments).