Adaptive mesh refinement procedures for the virtual element method
D. van Huyssteen, F. López-Rivarola, G. Etse, P. Steinmann
DOI: 10.23967/admos.2023.064

A Convergence Proof for Adaptive Parametric PDEs with Unbounded Coefficients
N. Farchmin, M. Eigel
DOI: 10.23967/admos.2023.005

Numerical methods for random parametric PDEs can greatly benefit from adaptive refinement schemes, in particular when functional approximations are computed, as in stochastic Galerkin methods with residual-based error estimation. From the mathematical side, especially when the coefficients of the PDE are unbounded, solvability is difficult to prove and numerical approximations face numerous challenges. In this talk we generalize the adaptive refinement scheme for elliptic parametric PDEs introduced in [1, 2] to unbounded (lognormal) diffusion coefficients [3]. The algorithm is guided by a reliable error estimator which steers both the refinement of the spatial finite element mesh and the enlargement of the stochastic approximation space. As the algorithm relies solely on (a sufficiently good approximation of) the Galerkin projection of the PDE solution and the PDE coefficient, it can be used in a non-intrusive manner, allowing for applications in many different settings. We prove that the proposed algorithm converges and present evidence that convergence rates similar to those of intrusive approaches can be observed.

Error estimation for surrogate models with noisy small-sized training sets
J. Wackers, Hayriye Pehlivan Solak, Riccardo Pellegrini, A. Serani, M. Diez
DOI: 10.23967/admos.2023.007

Simulation-driven shape optimization often uses surrogate models, i.e. approximate models fitted to a dataset of simulation results for a limited number of designs. The shape optimization is then performed over this surrogate model. For efficiency, modern approaches often construct the datasets adaptively, adding simulation points one by one where they are most likely to discover the optimum design [3]. The uncertainty estimation of the surrogate model is essential for guiding the choice of new sample points: underestimating the uncertainty leads to sampling in suboptimal regions and to missing the true optimum. Gaussian process regression naturally provides uncertainty estimates [4], and Stochastic Radial Basis Function (SRBF) surrogate models estimate the uncertainty from the spread of RBF fits with different kernels [5]. In the context of SRBF, this paper discusses two issues with uncertainty estimation. The first is that most existing techniques rely on knowledge about the global behaviour of the data, such as spatial correlations; however, the number of data points can be too small to reconstruct this global information from the data. We argue that in this situation a user-provided estimate of the function behaviour is a better choice (section 3). The second issue is that the dataset may contain noise, i.e. random errors without spatial correlation. Surrogate models can filter out this noise, but doing so introduces two separate uncertainties: the optimal amount of noise filtering is unknown, and for a small dataset (even with perfect noise filtering) the local mean of the data may not correspond to the true simulation response. In section 4 we introduce estimators for both uncertainties.

Dimensionality reduction and physics-based manifold learning for parametric models in biomechanics and tissue engineering
A. Muixí, A. Garcia-Gonzalez, S. Zlotnik, P. Díez
DOI: 10.23967/admos.2023.037

This work describes dimensionality reduction methods, focusing on Principal Component Analysis (PCA) and its nonlinear version, kernel Principal Component Analysis (kPCA) [1], and their potential application to data-assisted credible models in biomechanics and tissue engineering. These methodologies are intended to discover the low-dimensional manifold on which an input physical data set lives. Reducing the dimensionality of a complex physical system is a potential tool towards real-time, credible and accurate parametric models and patient-specific simulations. In this direction, Proper Orthogonal Decomposition (POD) combines PCA with a reduced basis approach to reduce the number of degrees of freedom in parametric boundary value problems. Additionally, for systems whose solutions belong to nonlinear manifolds, kernel Proper Orthogonal Decomposition (kPOD) uses the kPCA reduction to find a solution of the problem. The main features of kPOD are the use of local approximations, the possibility of enriching the reduced space with quadratic elements, the use of ad hoc kernels that include prior knowledge of the input data, and an iterative algorithm that explores the Voronoi diagram of the snapshots in the reduced space [2]. Moreover, dimensionality reduction in combination with surrogate modelling aims at finding initial (and accurate) approximations of parametric systems without physics involved. All presented methodologies are shown to be strong tools in several fields. To show the potential of these techniques, we present several examples of application in the biomechanical field, such as advection-diffusion in scaffolds for tissue engineering and vascular biomechanics.

{"title":"Adaptive Strategies for Frequency Domain MOR - A Comparative Framework","authors":"Q. Aumann, S. Chellappa, A. Nayak","doi":"10.23967/admos.2023.002","DOIUrl":"https://doi.org/10.23967/admos.2023.002","url":null,"abstract":"Minisymposium","PeriodicalId":414984,"journal":{"name":"XI International Conference on Adaptive Modeling and Simulation","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120945954","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Assessment of tailings dams using Model Order Reduction
Sergio Zlotnik, C. Nasika, Pedro Díez, Pierre Gerard, Thierry Massart
DOI: 10.23967/admos.2023.077

Tailings dams are structures built up during the mining process by compacting successive layers of earth. They contain the (usually toxic) material left over after the process of separating the valuable fraction from the uneconomic fraction of an ore. These dams exhibit a high rate of sudden and hazardous failures and, therefore, monitoring their state is a key process in the mining industry. The recent surge in the availability of sensors (e.g. the Internet of Things) makes it possible to enhance the data that can be gathered to monitor the mechanical and hydraulic state of the dams. Numerical models, on the other hand, can be used to enrich the local information collected by the sensors and provide a global view of the state of the dam. However, for monitoring purposes, numerical models are only useful if they provide results fast enough to allow a reaction to an unsafe state. In this presentation we describe the results presented in [1] and [2], where model order reduction techniques are applied in the context of data assimilation to learn about the state of tailings dams. A transient nonlinear hydro-mechanical model describing groundwater flow in unsaturated soil conditions is solved using the Reduced Basis method [1]. Hyper-reduction techniques (DEIM, LDEM) are tested and reduce computing times by a factor of up to 100 with respect to standard finite element methods [2].

{"title":"A modified Constitutive Relation Error (mCRE) framework to learn nonlinear constitutive models from strain measurements with thermodynamics-consistent Neural Networks","authors":"A. Benady, L. Chamoin, E. Baranger","doi":"10.23967/admos.2023.020","DOIUrl":"https://doi.org/10.23967/admos.2023.020","url":null,"abstract":"","PeriodicalId":414984,"journal":{"name":"XI International Conference on Adaptive Modeling and Simulation","volume":"85 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129468809","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Learning Viscoelastic Responses with a Thermodynamic Recurrent Neural Network with Maxwell Encoding
Nicolas Pistenon, S. Cantournet, J. Bouvard, D. P. Muñoz, P. Kerfriden
DOI: 10.23967/admos.2023.022

Neural network methods are increasingly used to build constitutive laws in computational mechanics [1]. Neural networks may, for instance, be used as surrogates for micro-mechanical models when evaluating the response of high-fidelity numerical representative volume elements proves prohibitively expensive. Alternatively, neural networks may be used whenever traditional phenomenological approaches to constitutive modelling fail, i.e. whenever one fails to find a functional form for the constitutive law that can represent the behaviour of the material faithfully over the entire range of possible loading scenarios. One example is the viscoelastic behaviour of polymers, which remains difficult to describe accurately. The state of the art in machine learning methods for the prediction of history-dependent constitutive laws does not yet offer models that combine strong interpolation and extrapolation capabilities with data requirements consistent with today's experimental capabilities [2]. To enforce a better inductive bias, mechanical knowledge has been exploited by introducing mechanical regularisation terms [3, 4] or by considering structural approaches [5]. In this work, we describe a novel neural network strategy that combines a Maxwell model, which is extensively used to describe linear viscoelastic responses, with a Thermodynamic Recurrent Neural Network. The coupling between the phenomenological and data-driven blocks of our model is done in two ways. Firstly, the neural network, and more precisely its LSTM cells, corrects the response provided by the Maxwell model, which closely resembles the residual connections

A posteriori error estimates for the Crank-Nicolson method: application to parabolic partial differential equations with small random input data
N. Shravani, Gujji Murali Mohan Reddy, Michael Vynnycky
DOI: 10.23967/admos.2023.029

In this article, we present residual-based a posteriori error estimates for the parabolic partial differential equation (PDE) with small random input data in the $L^2_P(\Omega; L^2(0,T; H^1(D)))$-norm, where $(\Omega, \mathcal{F}, P)$ is a complete probability space, $D$ is the physical domain, and $T > 0$ is the final time. Such a class of PDEs arises due to a lack of complete understanding of the physical model. To this end, the perturbation technique [2019, Arch. Comput. Methods Eng., 26, pp. 1313-1377] is exploited to express the exact random solution as a power series with respect to the uncertainty parameter, whence we obtain decoupled deterministic problems. Each problem is then discretized in space by the finite element method and advanced in time by the Crank-Nicolson scheme. Quadratic reconstructions are introduced to obtain optimal bounds in the temporal direction. The work generalizes the isotropic results obtained in [2009, SIAM J. Sci. Comput., 31, pp. 2757-2783] for deterministic parabolic PDEs to the parabolic PDE with small random input data. Numerical results demonstrate the effectiveness of the bounds.

High Continuity Basis's Impact on Continuous Global L2 (CGL2) Recovery
T. Kvamsdal, A. Abdulhaque, M. Kumar, K. Johannessen, A. Kvarving, K. Okstad
DOI: 10.23967/admos.2023.044

In recovery-based error estimation, we employ a projection technique to recover a post-processed quantity (usually the stresses or the gradient computed from the FE approximation). The error is estimated by taking the difference between the recovered quantity and the FE solution. An easy procedure to implement is the continuous global L2 (CGL2) recovery, initially used for a posteriori error estimation by Zienkiewicz and Zhu [1]. Kumar, Kvamsdal and Johannessen [2] developed CGL2 and Superconvergent Patch Recovery (SPR) error estimation methods applicable to adaptive refinement using LR B-splines [3] and observed very good results for both the CGL2 and the SPR recovery techniques. However, Cai and Zhang reported in [4] a case of malfunction of the CGL2 recovery applied to second-order triangular and tetrahedral Lagrange finite elements. Here we start out by presenting a motivational example that illustrates the benefits of using high-regularity splines in the CGL2-based gradient recovery procedure compared to using the classical Lagrange FEM basis functions. We then show the performance on some benchmark problems comparing the use of splines
