Unconditional energy stability and temporal convergence of first-order numerical scheme for the square phase-field crystal model
Guomei Zhao, Shuaifei Hu, P. Zhu
Comput. Math. Appl. Pub Date: 2023-01-01. DOI: 10.2139/ssrn.4359797
Operator inference with roll outs for learning reduced models from scarce and low-quality data
W. I. Uy, D. Hartmann, B. Peherstorfer
Comput. Math. Appl. Pub Date: 2022-12-02. DOI: 10.48550/arXiv.2212.01418
Data-driven modeling has become a key building block in computational science and engineering. However, the data available in science and engineering are typically scarce, often polluted with noise, and affected by measurement errors and other perturbations, which makes learning the dynamics of systems challenging. In this work, we propose to combine data-driven modeling via operator inference with the rollout-based dynamic training used for neural ordinary differential equations. Operator inference with roll outs inherits the interpretability, scalability, and structure preservation of traditional operator inference, while leveraging dynamic training via roll outs over multiple time steps to increase stability and robustness when learning from low-quality and noisy data. Numerical experiments with data describing shallow water waves and surface quasi-geostrophic dynamics demonstrate that operator inference with roll outs provides predictive models from training trajectories even if the data are sampled sparsely in time and polluted with noise of up to 10%.
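The rollout idea the abstract describes can be illustrated with a toy example. The sketch below is not the paper's operator inference formulation: it assumes a small linear model dx/dt = A x, forward Euler time stepping, and finite-difference gradient descent, purely to show what minimizing a multi-step rollout loss over noisy trajectory data looks like.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a damped linear oscillator dx/dt = A_true x, integrated with
# forward Euler and polluted with noise (mimicking low-quality data).
A_true = np.array([[0.0, 1.0], [-1.0, -0.1]])
dt, n = 0.01, 200
X = np.zeros((n + 1, 2))
X[0] = [1.0, 0.0]
for k in range(n):
    X[k + 1] = X[k] + dt * A_true @ X[k]
X_noisy = X + 0.01 * rng.standard_normal(X.shape)

def rollout_loss(A, data, dt, horizon):
    """Sum of squared roll-out errors over `horizon` forward-Euler steps,
    started from every snapshot in the trajectory."""
    states = data[:-horizon].copy()   # all start states at once
    loss = 0.0
    for j in range(1, horizon + 1):
        states = states + dt * states @ A.T         # one Euler step for every start
        loss += np.sum((states - data[j:len(data) - horizon + j]) ** 2)
    return loss

# Fit A by plain gradient descent on the rollout loss; the gradient over the
# four entries is approximated by finite differences for simplicity.
A, eps, lr = np.zeros((2, 2)), 1e-6, 0.2
for _ in range(500):
    base = rollout_loss(A, X_noisy, dt, 5)
    grad = np.zeros_like(A)
    for i in range(2):
        for j in range(2):
            Ap = A.copy()
            Ap[i, j] += eps
            grad[i, j] = (rollout_loss(Ap, X_noisy, dt, 5) - base) / eps
    A -= lr * grad
```

Fitting against several future snapshots at once, rather than one step ahead, is what gives the noise robustness the abstract refers to: single-step residuals are dominated by the measurement noise, while the accumulated multi-step error still reflects the underlying dynamics.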
A simplified lattice Boltzmann implementation of the quasi-static approximation in pipe flows under the presence of non-uniform magnetic fields
Hugo S. Tavares, B. Magacho, L. Moriconi, J. Loureiro
Comput. Math. Appl. Pub Date: 2022-11-17. DOI: 10.2139/ssrn.4368194
We propose a single-step simplified lattice Boltzmann algorithm capable of performing magnetohydrodynamic (MHD) flow simulations in pipes for very small values of the magnetic Reynolds number $R_m$. In most previous works, lattice Boltzmann simulations are performed with values of $R_m$ close to the Reynolds number, for flows in simplified rectangular geometries. One reason is the limitation of some traditional lattice Boltzmann algorithms in dealing with the very small magnetic diffusion time scales associated with most industrial applications of MHD, which require the use of the so-called quasi-static (QS) approximation. Another reason is the significant dependence that many boundary-condition methods for lattice Boltzmann have on the relaxation time parameter. To overcome these limitations, we introduce an improved simplified algorithm for the velocity and magnetic fields that directly solves the equations of the QS approximation, among other systems, without preconditioning procedures. In these algorithms, the effects of solid insulating boundaries are included through an improved explicit immersed boundary algorithm whose accuracy is not affected by the value of $R_m$. Validations against classic benchmarks and analyses of the energy balance in examples with uniform and non-uniform magnetic fields are presented. Furthermore, a progressive transition between the scenario described by the QS approximation and the canonical MHD equations in pipe flows is visualized by studying the evolution of the magnetic energy balance in examples with unsteady flows.
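As background on the quasi-static limit the abstract relies on: for very small $R_m$, a uniform imposed field $B_0$, and a negligible electrostatic potential, the Lorentz force reduces to the linear damping $f = \sigma\,(u \times B_0) \times B_0 = \sigma\,((u \cdot B_0)B_0 - |B_0|^2 u)$, which damps only the velocity component perpendicular to $B_0$. A minimal sketch of that formula (not the paper's lattice Boltzmann algorithm):

```python
import numpy as np

def qs_lorentz(u, B0, sigma=1.0):
    """Quasi-static (low-Rm) Lorentz force for a uniform imposed field B0,
    assuming a negligible electrostatic potential:
    f = sigma * ((u . B0) B0 - |B0|^2 u), i.e. sigma * (u x B0) x B0."""
    return sigma * (np.dot(u, B0) * B0 - np.dot(B0, B0) * u)

u = np.array([1.0, 0.0, 0.0])    # velocity perpendicular to the field
B0 = np.array([0.0, 1.0, 0.0])   # uniform imposed magnetic field
f = qs_lorentz(u, B0)            # pure damping of u
```

A velocity parallel to $B_0$ experiences no force, which is why the imposed field's orientation matters so much in the pipe-flow examples.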
A new variable shape parameter strategy for RBF approximation using neural networks
F. Mojarrad, M. H. Veiga, J. Hesthaven, Philipp Öffner
Comput. Math. Appl. Pub Date: 2022-10-30. DOI: 10.48550/arXiv.2210.16945
The choice of the shape parameter strongly affects the behaviour of radial basis function (RBF) approximations, as it must be selected to balance the ill-conditioning of the interpolation matrix against high accuracy. In this paper, we demonstrate how to use neural networks to determine the shape parameters in RBFs. In particular, we construct a multilayer perceptron trained with an unsupervised learning strategy, and use it to predict shape parameters for inverse multiquadric and Gaussian kernels. We test the neural network approach on RBF interpolation tasks and in an RBF-finite difference method in one and two space dimensions, demonstrating promising results.
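The trade-off the abstract refers to is easy to see numerically: flattening the inverse multiquadric kernel (small shape parameter) drives the interpolation matrix toward ill-conditioning, while a large shape parameter keeps the matrix well conditioned at the cost of accuracy. A minimal 1D sketch (the node count and test function are illustrative choices, not from the paper):

```python
import numpy as np

def imq(r, eps):
    """Inverse multiquadric RBF kernel with shape parameter eps."""
    return 1.0 / np.sqrt(1.0 + (eps * r) ** 2)

x = np.linspace(-1.0, 1.0, 15)        # interpolation nodes
xe = np.linspace(-1.0, 1.0, 200)      # evaluation grid
f = lambda t: np.tanh(3.0 * t)        # test function

for eps in (0.1, 1.0, 5.0, 20.0):
    A = imq(np.abs(x[:, None] - x[None, :]), eps)   # interpolation matrix
    c = np.linalg.solve(A, f(x))                    # RBF coefficients
    s = imq(np.abs(xe[:, None] - x[None, :]), eps) @ c
    print(f"eps={eps:5.1f}  cond(A)={np.linalg.cond(A):9.2e}  "
          f"max|s-f|={np.max(np.abs(s - f(xe))):.2e}")
```

Running this shows the conditioning of `A` growing as `eps` shrinks; the paper's contribution is to let a trained network pick `eps` inside this trade-off instead of tuning it by hand.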
Quasi-optimal hp-finite element refinements towards singularities via deep neural network prediction
Tomasz Sluzalec, R. Grzeszczuk, Sergio Rojas, W. Dzwinel, M. Paszyński
Comput. Math. Appl. Pub Date: 2022-09-13. DOI: 10.48550/arXiv.2209.05844
We show how to construct a deep neural network (DNN) expert that predicts quasi-optimal $hp$-refinements for a given computational problem. The main idea is to train the DNN expert while executing the self-adaptive $hp$-finite element method ($hp$-FEM) algorithm, and to use it later to predict further $hp$-refinements. For the training, we use a self-adaptive $hp$-FEM algorithm based on a two-grid paradigm: it employs the fine mesh to provide the optimal $hp$-refinements for the coarse mesh elements. We aim to construct a DNN expert that identifies quasi-optimal $hp$-refinements of the coarse mesh elements. During the training phase, we use a direct solver to obtain the fine-mesh solution that guides the optimal refinements over each coarse mesh element. After training, we turn off the self-adaptive $hp$-FEM algorithm and continue with the quasi-optimal refinements proposed by the trained DNN expert. We test our method on the three-dimensional Fichera and two-dimensional L-shaped domain problems, and we verify the convergence of the numerical accuracy with respect to the mesh size. We show that the exponential convergence delivered by the self-adaptive $hp$-FEM is preserved if we continue the refinements with a properly trained DNN expert. Thus, in this paper, we show that from the self-adaptive $hp$-FEM it is possible to teach the DNN expert the locations of the singularities, and to continue with the selection of quasi-optimal $hp$-refinements, preserving the exponential convergence of the method.
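The selection step the abstract alludes to (choosing, per coarse element, the refinement that best reduces the error observed against the fine grid) can be caricatured as an error-decrease-per-dof criterion. The following is a hypothetical sketch of that idea only; the names, numbers, and scalar criterion are illustrative and not the paper's algorithm:

```python
def best_refinement(current_error, candidates):
    """Pick the candidate refinement with the largest error decrease per
    degree of freedom added, judged against a fine-grid reference.

    candidates: list of (name, projected_error, added_dofs) tuples."""
    def rate(c):
        _, projected_error, added_dofs = c
        return (current_error - projected_error) / max(added_dofs, 1)
    return max(candidates, key=rate)[0]

# An element sitting on a singularity: splitting it (h) pays off most.
near_singularity = [("h-refine", 0.2, 6), ("p-enrich", 0.8, 3)]
# An element where the solution is smooth: raising the order (p) is the better deal.
smooth_region = [("h-refine", 0.6, 6), ("p-enrich", 0.3, 3)]

choice_singular = best_refinement(1.0, near_singularity)
choice_smooth = best_refinement(1.0, smooth_region)
```

The DNN expert in the paper effectively learns to reproduce such decisions from coarse-mesh information alone, so the expensive fine-grid solve can be switched off after training.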
Hybrid mixed discontinuous Galerkin finite element method for incompressible wormhole propagation problem
Jiansong Zhang, Yun-Wey Yu, Jiang Zhu, Yue Yu, R. Qin
Comput. Math. Appl. Pub Date: 2022-09-04. DOI: 10.48550/arXiv.2209.01528
Wormhole propagation plays a very important role in the production enhancement of oil and gas reservoirs. A new combined hybrid mixed finite element method is proposed to solve the incompressible wormhole propagation problem together with a discontinuous Galerkin finite element procedure: the new hybrid mixed finite element algorithm is established for the pressure equation, the discontinuous Galerkin finite element method is used for the concentration equation, and the porosity function is then computed directly from the approximate concentration. The new combined method preserves local mass balance while also maintaining the boundedness of the porosity. The convergence of the proposed method is analyzed and an optimal error estimate is derived. Finally, numerical examples are presented to verify the validity of the algorithm and the correctness of the theoretical results.
Plain convergence of goal-oriented adaptive FEM
Valentin Helml, M. Innerberger, D. Praetorius
Comput. Math. Appl. Pub Date: 2022-08-22. DOI: 10.48550/arXiv.2208.10143
We discuss goal-oriented adaptivity in the framework of conforming finite element methods and the plain convergence of the related a posteriori error estimator for different general marking strategies. We present an abstract analysis for two different settings. First, we consider problems where a local discrete efficiency estimate holds. Second, we show plain convergence in a setting that relies only on structural properties of the error estimators, namely stability on non-refined elements as well as reduction on refined elements. In particular, the second setting does not require reliability and efficiency estimates. Numerical experiments underline our theoretical findings.
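One concrete instance of the "general marking strategies" the abstract mentions is Dörfler (bulk) marking: mark a minimal set of elements whose squared local indicators capture a fixed fraction θ of the total estimated error. A minimal sketch of that strategy (the paper's abstract framework covers this and other markings; the indicator values below are made up for illustration):

```python
import numpy as np

def doerfler_mark(eta, theta=0.5):
    """Return indices of a minimal set M with sum(eta[M]**2) >= theta * sum(eta**2).

    eta: array of local a posteriori error indicators, one per element."""
    order = np.argsort(eta)[::-1]          # largest indicators first
    cum = np.cumsum(eta[order] ** 2)
    m = np.searchsorted(cum, theta * cum[-1]) + 1
    return order[:m]

eta = np.array([0.1, 0.7, 0.2, 0.6, 0.05])
marked = doerfler_mark(eta, theta=0.5)     # element 1 alone already covers the bulk
```

Marking by a fixed bulk fraction, rather than a fixed element count, is what makes the estimator reduction on refined elements effective at every step of the adaptive loop.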