A Posteriori Error Estimation and Adaptivity for Second-Order Optimally Convergent G/XFEM and FEM
M. E. Bento, S. Proença, C. Duarte
DOI: https://doi.org/10.23967/admos.2023.042

The Generalized/eXtended Finite Element Method (G/XFEM) is known to solve efficiently and accurately problems that are challenging for standard methodologies. The method can deliver optimal convergence rates in the energy norm, and global matrices whose scaled condition number is of the same order as in the Finite Element Method (FEM). This is achieved even for problems of Linear Elastic Fracture Mechanics (LEFM), whose solutions contain singularities and discontinuities. Despite delivering optimal convergence rates, however, first-order G/XFEM has been shown [1] not to be competitive with second-order FEM using quarter-point elements, especially for three-dimensional (3-D) problems. Because of this, optimally convergent second-order G/XFEM, customized to solve LEFM problems, have recently been proposed [1, 2, 3]. The formulations presented in these works augment both standard Lagrangian FEM approximation spaces [3] and p-version FEM approximation spaces [1, 2] in order to insert the discontinuous and singular behavior of fractures into the G/XFEM numerical approximation. It is important to note that, in addition to using enrichment functions, G/XFEM still needs local mesh refinement around crack fronts in order to achieve optimal convergence. This must be considered especially for 3-D problems that violate the assumptions of the adopted singular enrichments. While this local mesh refinement can be easily performed for simple cases, the level of refinement
Fast Simulation of Wheel-Rail Contact Using Proper Generalized Decomposition
C. Ansin, F. Larsson, R. Larsson, M. Ekh, B. Pålsson
DOI: https://doi.org/10.23967/admos.2023.073

Degradation of the railhead in curved tracks, caused by high lateral contact forces between wheel and rail, is associated with high maintenance costs, which motivates the need for predictive methodologies. The damage mechanisms include plastic deformation, wear, and surface (or subsurface) initiated cracks due to rolling contact fatigue (RCF). Numerical computations of the long-term evolution and degradation of the railhead are computationally demanding due to the large number of load cycles and the large variations in vehicle loads and wheel rim geometries. An existing framework [1] considers feedback loops between dynamic vehicle-track interaction, elastic-plastic wheel-rail contact, and accumulated rail damage due to plasticity and surface wear to update the rail profile. In this work, however, the contact simulation and the subsequent analysis of the evolution of plastic deformation are restricted to a meta-modeling strategy in 2D in order to reduce the computational cost. To increase computational efficiency, we adopt the Proper Generalized Decomposition (PGD) to solve a reduced-order problem for each load cycle. In order to model the 3D contact situation, the rail cross section is modeled in 2D, while the coordinate along the rail constitutes a parameter in the PGD approximation. Furthermore, the varying contact load, predicted from dynamic train-track simulations, is parametrized in terms of its spatial distribution. In addition to formulating the problem, we discuss and evaluate the accuracy and efficiency of the proposed strategy through a set of verification examples for elastic contact under varying traffic loads. Finally, we also discuss the outlook towards elastic-plastic simulations.
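The separated-representation idea behind PGD can be illustrated on a toy problem. The sketch below (illustrative only, not the authors' formulation; the field and all names are made up) approximates a 2-D field, with one axis standing in for the rail cross-section coordinate and the other for the along-rail parameter, by greedily computing one separated mode at a time through alternating fixed-point sweeps:

```python
import numpy as np

def pgd_modes(U, n_modes=2, n_sweeps=50):
    """Greedy rank-one separated approximation U[i,j] ~ sum_k F_k[i]*G_k[j].

    Each mode is found by alternating least-squares (fixed-point) sweeps on
    the current residual, then deflated -- the basic PGD construction.
    """
    R = U.copy()
    modes = []
    for _ in range(n_modes):
        # initialize F from the residual column of largest norm
        F = R[:, np.argmax(np.linalg.norm(R, axis=0))].copy()
        G = np.ones(U.shape[1])
        for _ in range(n_sweeps):
            G = R.T @ F / (F @ F)  # best G for the current F
            F = R @ G / (G @ G)    # best F for the current G
        modes.append((F, G))
        R = R - np.outer(F, G)     # deflate the captured mode
    return modes, R

# toy separable "contact" field: x = cross-section coordinate,
# s = coordinate along the rail (the extra PGD parameter)
x = np.linspace(-1.0, 1.0, 80)
s = np.linspace(0.0, 5.0, 120)
U = (np.outer(np.exp(-4 * x**2), 1 + 0.3 * np.sin(2 * np.pi * s))
     + 0.2 * np.outer(x, np.cos(2 * np.pi * s)))

modes, R = pgd_modes(U, n_modes=2)
rel_res = np.linalg.norm(R) / np.linalg.norm(U)
print(f"relative residual after 2 modes: {rel_res:.1e}")
```

Since the toy field is exactly a sum of two separable terms, two modes reduce the residual to round-off; in the actual contact problem the number of modes needed depends on how separable the solution is in the along-rail coordinate.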
Adaptive mixed isogeometric analysis of a highly convective benchmark problem for the Boussinesq equations
Abdullah Abdulhaque, Trond Kvamsdal, Mukesh Kumar, A. Kvarving
DOI: https://doi.org/10.23967/admos.2023.045

In this article, we study a special benchmark problem for the Boussinesq equations. This system consists of the Navier-Stokes equations coupled with the advection-diffusion equation, and it is used for modelling buoyancy-driven flow. The solution process combines mixed isogeometric discretization with adaptive mesh refinement [4]. We discretize the equation system with the recently proposed isogeometric versions of the Taylor-Hood, Sub-Grid and Raviart-Thomas elements [1]. The adaptive refinement is based on LR B-splines [2] and recovery estimators [3]. We investigate the suitability of our adaptive methods for Rayleigh numbers in the range 10^1 to 10^5 by comparing with a high-resolution reference solution.
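The mechanics of a recovery estimator can be sketched in 1-D (a ZZ-type nodal-averaging example under assumed conventions, not the LR B-spline setting of the abstract): the broken gradient of a piecewise-linear approximation is post-processed into a smoother recovered gradient, and the mismatch between the two drives the element indicators.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 41)              # 40 linear elements
u = np.sin(2 * np.pi * x)                  # nodal values of a smooth solution
du = np.diff(u) / np.diff(x)               # element-wise (broken) gradient

g = np.empty_like(x)                       # recovered nodal gradient
g[1:-1] = 0.5 * (du[:-1] + du[1:])         # average of adjacent elements
g[0], g[-1] = du[0], du[-1]                # one-sided at the boundary

# element indicators: exact L2 norm of the linear mismatch on each element,
# int_0^h (e0 + (e1 - e0) t/h)^2 dt = h*(e0^2 + e0*e1 + e1^2)/3
h = np.diff(x)
e0, e1 = g[:-1] - du, g[1:] - du
eta = np.sqrt(np.sum(h * (e0**2 + e0 * e1 + e1**2) / 3.0))

# reference: true H1-seminorm interpolation error on a fine sample grid
xf = np.linspace(0.0, 1.0, 8001)
dxf = xf[1] - xf[0]
idx = np.clip(np.searchsorted(x, xf, side="right") - 1, 0, len(du) - 1)
err = np.sqrt(np.sum((2 * np.pi * np.cos(2 * np.pi * xf) - du[idx]) ** 2) * dxf)

print(f"estimator {eta:.4f}, true error {err:.4f}, effectivity {eta / err:.2f}")
```

On smooth solutions the recovered gradient is superconvergent at the nodes, so the effectivity index approaches one under refinement; this is what makes recovery estimators attractive as refinement indicators.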
An Efficient hp-Adaptive Approach for Compressible Two-Phase Flows using the Level-Set Ghost Fluid Method
P. Mossier, D. Appel, A. Beck, C. Munz
DOI: https://doi.org/10.23967/admos.2023.039

We present an efficient hp-adaptive discretization for sharp interface simulations of compressible two-phase flows using the level-set ghost fluid method. The discretization employs a high-order p-adaptive Discontinuous Galerkin (DG) scheme in regions of high regularity, whereas discontinuities are captured by a more robust Finite Volume (FV) scheme on an element-local sub-grid. The h-refinement strategy effectively carries over the subscale resolution capability of the DG scheme to shocks and the phase interface, while preserving an essentially non-oscillatory behavior of the solution. The p-refinement and the FV limiting are controlled by a common indicator that evaluates the modal decay of the solution polynomials. The resulting adaptive hybrid DG/FV operator is used for the governing equations of both the fluid flow and the level-set transport. However, the hp-adaptive discretization, together with solving the computationally expensive level-set equations only in the vicinity of the phase interface, causes pronounced variations in the element costs throughout the domain. In parallel computations, these variations imply a significant workload imbalance among the processor units. To ensure parallel scalability, the proposed discretization thus needs to be complemented by a dynamic load balancing (DLB) approach. We introduce a DLB scheme that determines the current workload distribution accurately through element-local walltime measurements and repartitions the elements efficiently along a space-filling curve. We provide strong scaling results to underline the parallel efficiency of the presented hp-adaptive sharp interface framework. Moreover, complex benchmark problems demonstrate that it efficiently and accurately handles the inherent multiscale physics of compressible two-phase flows.
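A modal-decay indicator of the kind described above can be sketched as follows. The energy-fraction form used here (fraction of modal energy in the highest Legendre mode, in the spirit of Persson and Peraire) is a common variant and is assumed for illustration, not taken from the abstract:

```python
import numpy as np
from numpy.polynomial import legendre as leg

def smoothness_indicator(c):
    """log10 of the energy fraction carried by the highest Legendre mode.

    Near zero => under-resolved or discontinuous (trigger FV limiting or
    h-refinement); strongly negative => smooth (candidate for p-refinement).
    """
    return np.log10(c[-1] ** 2 / np.sum(c ** 2) + 1e-30)

# element-local samples of two solution profiles on the reference element [-1, 1]
xs = np.linspace(-1.0, 1.0, 33)
deg = 7                                          # odd top mode, so sign(x) excites it
c_smooth = leg.legfit(xs, np.exp(xs), deg)       # analytic profile
c_shock = leg.legfit(xs, np.sign(xs), deg)       # discontinuous profile

ind_smooth = smoothness_indicator(c_smooth)
ind_shock = smoothness_indicator(c_shock)
print(f"smooth: {ind_smooth:.1f}, shock: {ind_shock:.1f}")
```

The gap between the two indicator values is what lets a single threshold steer both the p-refinement and the FV limiting.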
Verifying and applying LES-C turbulence models for turbulent incompressible flow and fluid-fluid interaction problems
Mustafa Aggul, Yasasya Batugedara, A. Labovsky, Eda Onal, J. Kyle Schwiebert
DOI: https://doi.org/10.23967/admos.2023.040

Large eddy simulation (LES) models for incompressible flow have found wide application in computational fluid dynamics (CFD), including areas relevant to aeronautics such as computing drag and lift coefficients and fluid-structure interaction problems [1, 2]. LES models have also found application in climate science through modeling fluid-fluid (atmosphere-ocean) problems. Large eddy simulation with correction (LES-C) turbulence models, introduced in 2020, are a new class of turbulence models which rely on defect correction to build a high-accuracy turbulence model on top of any existing LES model [3, 4, 5]. LES-C models have two additional benefits worth serious consideration. First, LES-C models are easy to run in parallel: one processor can compute the defect (LES) solution, while the other processor computes the LES-C solution. Thus, if one has access to a machine with more than one computational core (essentially ubiquitous in modern architectures), the improved solution comes at nearly no cost in terms of the "wall time" it takes a simulation to complete. Second, LES-C models readily lend themselves to coupling with other
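The defect-correction mechanism underlying LES-C can be illustrated on a linear model problem (a hypothetical sketch, not the authors' turbulence formulation): a cheap approximate operator plays the role of the LES model, and a single correction step, reusing the same cheap solver on the residual, plays the role of the LES-C update.

```python
import numpy as np

n = 64
A = (np.diag(2.0 * np.ones(n))
     + np.diag(-0.5 * np.ones(n - 1), 1)
     + np.diag(-0.5 * np.ones(n - 1), -1))   # "true" operator (SPD tridiagonal)
d = np.diag(A)                               # cheap approximate operator (diagonal part)
b = np.sin(np.linspace(0.0, np.pi, n))       # smooth right-hand side

u_exact = np.linalg.solve(A, b)
u_les = b / d                                # solve with the approximate operator only
u_lesc = u_les + (b - A @ u_les) / d         # one defect-correction step ("LES-C")

err_les = np.linalg.norm(u_les - u_exact)
err_lesc = np.linalg.norm(u_lesc - u_exact)
print(f"error before correction: {err_les:.3e}, after: {err_lesc:.3e}")
```

Note that the corrected solve reuses only the cheap operator, so the defect solution and the corrected solution can be advanced concurrently on separate processors, which is the parallel structure the abstract points out.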
Space-Time Goal Oriented Error Estimation and Adaptivity for Discretization and Reduced Order Modeling Errors
J. Roth, H. Fischer, J. Thiele, U. Köcher, A. Fau, L. Chamoin, T. Wick
DOI: https://doi.org/10.23967/admos.2023.026

We present a uniform framework in which the dual-weighted residual (DWR) method is used for spatial and temporal discretization error control [1], as well as for the control of the reduced-order modeling error in the proper orthogonal decomposition (POD). In the first part of this presentation, the DWR method is applied to a space-time formulation of non-stationary Navier-Stokes flow. Tensor-product space-time finite elements are used to discretize the variational formulation, with discontinuous Galerkin finite elements in time and inf-sup stable Taylor-Hood finite element pairs in space. To estimate the error in a quantity of interest and drive adaptive refinement in time and space, we demonstrate how the DWR method for incompressible flow [2] can be extended to a partition-of-unity-based error localization [3, 4]. Our methodology is substantiated on the two-dimensional flow-around-a-cylinder benchmark problem. In the second
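The DWR principle can be sketched algebraically (assuming a linear problem and a linear goal functional; this is illustrative, not the authors' space-time formulation): for A u = b and goal J(u) = j . u, the dual solution of A^T z = j turns the residual of any approximation u_h into the exact goal error, J(u) - J(u_h) = z . (b - A u_h), and componentwise products give localized indicators, in the spirit of a partition-of-unity splitting, that sum back to the global estimate.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 40
A = np.eye(n) + 0.05 * rng.standard_normal((n, n))  # well-conditioned model operator
b = rng.standard_normal(n)
j = rng.standard_normal(n)                          # linear goal functional J(u) = j.u

u = np.linalg.solve(A, b)                  # "exact" primal solution
u_h = u + 0.01 * rng.standard_normal(n)    # perturbed "discrete" solution
z = np.linalg.solve(A.T, j)                # dual (adjoint) solution

r = b - A @ u_h                            # primal residual
eta_local = z * r                          # localized indicators (sum to the estimate)
goal_error = j @ u - j @ u_h

print(f"goal error {goal_error:.6e}, DWR estimate {eta_local.sum():.6e}")
```

For nonlinear problems and nonlinear goals the identity becomes an estimate with linearization remainders, which is where the adaptive machinery of the talk takes over.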
A posteriori error estimates of elliptic and parabolic equations for the weak Galerkin finite element methods
Y. Nie, Y. Liu
DOI: https://doi.org/10.23967/admos.2023.043
The use of adaptive FEM-SPH technique in high-velocity impact simulations
A. Cherniaev
DOI: https://doi.org/10.23967/admos.2023.052

It is well known that while the meshless smoothed particle hydrodynamics (SPH) technique is often advantageous in modelling scenarios involving extreme deformations and fragmentation, the finite element method (FEM) in its Lagrangian implementation is well-suited for tracking the materials' interfaces. To use the advantages of both techniques simultaneously, an adaptive FEM/SPH approach can be employed. In this method, the local and adaptive transformation of Lagrangian solid elements to SPH particles is triggered by erosion of the solid elements when they become highly distorted and inefficient. The SPH particles replacing the eroded solid elements inherit all the nodal and integration-point quantities of the original solids and are initially attached to the neighbouring solid elements. The LS-DYNA implementation of this technique was adopted in this study for the solution of two problems: (1) rub of a turbofan engine blade against the engine's fan case; and (2) collision of an orbital debris particle with a sandwich panel of a spacecraft bus. For the first problem, predictions of the adaptive technique are compared with those obtained using FEM-only and SPH-only models. For the second problem, a comparison of the numerical and experimental results is provided. The study highlights advantages and limitations of the adaptive modelling methodology.
Multiscale Finite Element approaches: error estimations and adaptivity for an enriched variant
F. Legoll
DOI: https://doi.org/10.23967/admos.2023.031

The Multiscale Finite Element Method (MsFEM) is a Finite Element type approximation method for multiscale problems, where the basis functions used to generate the approximation space are precomputed as solutions to problems posed on local elements and resembling the global problem of interest. These basis functions are thus specifically adapted to the problem at hand. Once these local basis functions have been computed, a standard Galerkin approximation of the global problem is performed. Many ways to define these basis functions have been proposed in the literature over the past years. While a priori error estimates have been established for all these variants, a posteriori estimates are much less frequent and we refer e.g. to [1, 2] for some contributions in that direction. In this work, we introduce and analyze a specific MsFEM variant, the construction of which is inspired by component mode synthesis techniques. In particular, we enrich the standard MsFEM basis set by highly oscillatory basis functions that are solutions to local equilibrium problems and satisfy Dirichlet boundary conditions (on the boundary of the local elements) given by (possibly high order) polynomials. After having discussed the performance of this new approach, we present a posteriori error estimates that are useful to appropriately choose the degrees of the polynomial functions used as boundary conditions on each edge of the coarse mesh. This work [3] is joint with U. Hetmaniuk.
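The local basis construction at the heart of MsFEM can be illustrated in 1-D, where the multiscale basis on a coarse element solves (a(x) u')' = 0 with nodal boundary conditions and admits the closed form u(x) = 1 - I(x_L, x) / I(x_L, x_R), with I integrating 1/a. This sketch shows the basic (unenriched) construction, not the enriched variant of the abstract:

```python
import numpy as np

def msfem_basis(a, x):
    """Left multiscale basis on one coarse element [x[0], x[-1]].

    Solves (a(x) u')' = 0 with u(x[0]) = 1, u(x[-1]) = 0 via the closed form
    u(x) = 1 - int_{x_L}^{x} ds/a(s) / int_{x_L}^{x_R} ds/a(s),
    using trapezoidal quadrature on the fine grid x.
    """
    inv_a = 1.0 / a(x)
    cum = np.concatenate(
        [[0.0], np.cumsum(0.5 * (inv_a[1:] + inv_a[:-1]) * np.diff(x))]
    )
    return 1.0 - cum / cum[-1]

x = np.linspace(0.0, 1.0, 401)                       # fine grid within one coarse element
a_osc = lambda t: 1.0 / (2.0 + 1.9 * np.sin(2 * np.pi * t / 0.05))  # oscillating coefficient
phi = msfem_basis(a_osc, x)
print(phi[0], phi[-1])  # 1.0 0.0
```

Unlike the standard linear hat function, this basis oscillates with the coefficient so that the flux a(x) phi'(x) is constant across the element, which is exactly why MsFEM spaces capture the fine-scale behavior on a coarse mesh.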
On the Use of Neural Networks for Inverse Problems
L. Herrmann, T. Bürchner, S. Kollmannsberger
DOI: https://doi.org/10.23967/admos.2023.018