Title: A randomized neural network based Petrov–Galerkin method for approximating the solution of fractional order boundary value problems
Author: John P. Roop
Journal: Results in Applied Mathematics, vol. 23, Article 100493 (published 2024-08-01; JCR Q2, Mathematics, Applied; impact factor 1.4)
DOI: 10.1016/j.rinam.2024.100493
Full text: https://www.sciencedirect.com/science/article/pii/S2590037424000633
Citations: 0
Abstract
This article presents the implementation of a randomized neural network (RNN) approach for approximating the solution of fractional order boundary value problems using a Petrov–Galerkin framework with Lagrange basis test functions. Traditional methods, such as physics-informed neural networks (PINNs), rely on standard deep learning techniques and suffer from a computational bottleneck. In contrast, RNNs offer an alternative by employing a fixed random hidden structure with random coefficients and solving only for the output layer. Using RNNs as trial functions and piecewise Lagrange polynomials as test functions allows the application of numerical analysis principles. The article covers the construction and properties of the RNN basis, the definition and solution of fractional boundary value problems, and the implementation of the RNN Petrov–Galerkin method. We derive the stiffness matrix and solve the resulting system using least squares. Error analysis shows that the method satisfies the hypotheses of the Lax–Milgram lemma together with a Céa inequality, ensuring optimal error estimates that depend on the regularity of the exact solution. Computational experiments demonstrate the method's efficacy in multiple cases with both regular and irregular solutions. The results highlight the utility of RNN-based Petrov–Galerkin methods for solving fractional differential equations, with experimentally observed convergence.
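The pipeline the abstract describes (a fixed random hidden layer as trial basis, a Petrov–Galerkin weak form tested against piecewise Lagrange polynomials, and a least-squares solve for the output layer only) can be sketched in code. The snippet below is an illustrative sketch under simplifying assumptions, not the paper's implementation: it swaps the fractional operator for the integer-order model problem -u''(x) = f(x) on (0, 1) with homogeneous Dirichlet conditions, uses tanh random features multiplied by x(1-x) so the trial basis satisfies the boundary conditions by construction, and uses piecewise-linear hat functions as the test space.

```python
import numpy as np

# Illustrative sketch of the RNN Petrov-Galerkin idea on the integer-order
# model problem -u''(x) = f(x), u(0) = u(1) = 0 (NOT the paper's fractional
# operator). Hidden weights are drawn once and frozen; only the output-layer
# coefficients c are computed, via least squares on the weak-form system.

rng = np.random.default_rng(0)
M, N = 40, 80                    # trial features, interior hat test functions
w = rng.uniform(-10.0, 10.0, M)  # frozen hidden weights
s = rng.uniform(0.0, 1.0, M)     # transition locations inside [0, 1]
b = -w * s                       # frozen hidden biases: tanh(w*(x - s))

def psi(x):
    # Trial basis: x(1-x) * tanh(w x + b); every feature satisfies the BCs.
    return (x * (1 - x))[:, None] * np.tanh(np.outer(x, w) + b)

def dpsi(x):
    # Derivative of the trial basis (product rule).
    t = np.tanh(np.outer(x, w) + b)
    return (1 - 2 * x)[:, None] * t + (x * (1 - x))[:, None] * (1 - t**2) * w

# Uniform mesh for the piecewise-linear hat test functions.
nodes = np.linspace(0.0, 1.0, N + 2)
h = nodes[1] - nodes[0]

# Midpoint quadrature aligned with the mesh (25 points per mesh cell), so the
# piecewise-constant hat derivatives are smooth within every quadrature cell.
Q = 25 * (N + 1)
xq = (np.arange(Q) + 0.5) / Q
dx = 1.0 / Q

f = lambda x: np.pi**2 * np.sin(np.pi * x)   # manufactured: u(x) = sin(pi x)

# Weak form of -u'' = f: integral(u' v') = integral(f v) for each test hat v.
A = np.empty((N, M))   # stiffness matrix A[i, j] = integral(psi_j' hat_i')
rhs = np.empty(N)      # load vector rhs[i] = integral(f hat_i)
dP = dpsi(xq)
for i in range(N):
    center = nodes[i + 1]
    dhat = (np.where((xq >= center - h) & (xq < center), 1.0, 0.0)
            - np.where((xq >= center) & (xq < center + h), 1.0, 0.0)) / h
    hat = np.maximum(0.0, 1.0 - np.abs(xq - center) / h)
    A[i] = dhat @ dP * dx
    rhs[i] = np.sum(f(xq) * hat) * dx

# Overdetermined (N > M) rectangular system: least-squares solve, as in the
# paper's approach of solving only for the output layer.
c, *_ = np.linalg.lstsq(A, rhs, rcond=None)

u_h = psi(xq) @ c
err = np.max(np.abs(u_h - np.sin(np.pi * xq)))
print(f"max error vs exact solution: {err:.2e}")
```

Taking more test functions than trial features (N > M here) yields a rectangular stiffness matrix whose least-squares solution is well posed; the paper's fractional setting replaces the first-derivative pairing in the bilinear form with fractional derivatives of the trial and test functions, but the assembly-then-least-squares structure is the same.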