Low order implicit time integration schemes play a key role in several fields of computational mechanics, such as the heat equation or inelastic constitutive equations. Embedded Runge-Kutta (RK) methods provide an attractive methodology by means of adaptive time step size control. Following Fehlberg's suggestion, only one extra function evaluation is required to estimate the local error of the embedded method. In the present paper, this methodology is applied to several prominent low order implicit RK schemes: the first order implicit Euler method, the second order trapezoidal rule and the second order Ellsiepen method. Their advantages are stability and comparatively low computational cost; however, they require the solution of a nonlinear system of equations. This paper presents a general approach for the construction of third order Runge-Kutta methods by embedding the above-mentioned implicit schemes into the class of ELDIRK methods. These are defined to have an explicit last stage in the general Butcher array of Diagonally Implicit Runge-Kutta (DIRK) methods, with the consequence that no additional system of equations must be solved. The main results, valid also for non-linear ordinary differential equations, are as follows: two extra function evaluations are required to embed the implicit Euler method, and one extra function evaluation is required for the trapezoidal rule and the Ellsiepen method, in order to obtain the respective third order properties. The methodology is applied to two different goal functions in terms of the standard global error, namely a time point goal function and a time integrated goal function. Two numerical examples are concerned with a parachute with viscous damping and a two-dimensional laser beam simulation. Here, we verify the higher order convergence behaviour of the proposed new ELDIRK methods, and their successful performance for asymptotically exact global error estimation with so-called reversed embedded RK methods is shown.
{"title":"Runge Kutta (ELDIRK) methods for embedding of low order implicit time integration schemes for goal oriented global error estimation","authors":"R. Mahnken","doi":"10.23967/admos.2023.050","DOIUrl":"https://doi.org/10.23967/admos.2023.050","url":null,"abstract":"Low order implicit time integration schemes play a key role for time integration in several fields of computational mechanics, such as for the heat equation or inelastic constitutive equations, respectively. Embedded Runge-Kutta (RK) methods provide an attractive methodology by means of an adaptive time step size control. According to Fehlbergs suggestion, only one extra function calculation is required to estimate the local error of the embedded method. In the present paper, this methodology is applied to several prominent low order implicit RK-schemes, such as the first order implicit Euler-method, the second order trapezoidal rule and the second order Ellsiepen method. Its advantages are stability and comparatively low computational cost, however, they require the solution of a nonlinear system of equations. This paper presents a general approach for the construction of third order Runge-Kutta methods by embedding the above mentioned implicit schemes into the class of ELDIRK-methods. These will be defined to have an explicit last stage in the general Butcher array of Diagonal Implicit Runge-Kutta (DIRK) methods, with the consequence, that no additional system of equations must be solved. The main results – valid also for non-linear ordinary differential equations – are as follows: Two extra function calculations are required in order to embed the implicit Euler-method and one extra function calculation is required for the trapezoidal-rule and the Ellsiepen method, in order to obtain the third order properties, respectively. The methodology is applied to two different goal functions in terms of the standard global error, that is, a time point goal function and a time integrated goal function. Two numerical examples are concerned with a parachute with viscous damping and a two-dimensional laser beam simulation. Here, we verify the higher order convergence behaviours of the proposed new ELDIRK-methods, and its successful performances for asymptotically exact global error estimation of so-called reversed embedded RK-method are shown","PeriodicalId":414984,"journal":{"name":"XI International Conference on Adaptive Modeling and Simulation","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132499170","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Following the advances in measurement technology and its widespread availability, mechanical systems and structures are increasingly equipped with sensors to obtain continuous information on the system state. Coupled with robust numerical models, this information can be used to build a numerical twin of the structure that is linked to its physical twin via a feedback loop. This results in the concept of Dynamic Data Driven Application Systems (DDDAS), which can predict and control the evolution of the physical phenomena at stake on the structure, as well as dynamically update the numerical model with the help of real-time measurements [1, 2]. The physical evolution control is not addressed here, as the focus is mainly on the model updating part of the DDDAS process. This step requires data assimilation and sequentially solving a potentially ill-posed inverse problem. A robust approach to solving inverse problems involving numerical models with experimental inputs is the modified Constitutive Relation Error (mCRE) [3]. One of the critical features of this method is the distinction between reliable and unreliable information, so that only reliable information, such as equilibrium, known boundary conditions, and sensor positions, is strongly imposed in the definition of the functional. In contrast, unreliable information, namely the constitutive relation, unknown boundary conditions, and sensor measurements, is dealt with in a more relaxed sense. This energy-based functional can be conceived as a least squares minimization problem on the measurement error, regularized by a model error term, also known as the Constitutive Relation Error (CRE).
{"title":"Model updating with a Modified Dual Kalman Filter acting on distributed strain measurements","authors":"S. Farahbakhsh, L. Chamoin, M. Poncelet","doi":"10.23967/admos.2023.021","DOIUrl":"https://doi.org/10.23967/admos.2023.021","url":null,"abstract":"Following the advances in measurement technology and its vast availability, mechanical systems and structures are increasingly equipped with sensors to obtain continuous information regarding the system state. Coupled with robust numerical models, this information can be used to build a numerical twin of the structure that is linked to its physical twin via a feedback loop. This results in the concept of Dynamic Data Driven Application Systems (DDDAS) that can predict and control the evolution of the physical phenomena at stake on the structure, as well as dynamically updating the numerical model with the help of real-time measurements [1, 2]. The physical evolution control is not addressed here, as the focus is mainly on the model updating part of the DDDAS process. This step requires data assimilation and sequentially solving a potentially ill-posed inverse problem. A robust approach towards solving inverse problems regarding numerical models with experimental inputs is the modified Constitutive Relation Error (mCRE) [3]. One of the critical features of this method is the distinction between reliable and unreliable information so that only reliable ones, such as equilibrium, known boundary conditions, and sensor positions, are strongly imposed in the definition of the functional. In contrast, unreliable information, namely constitutive relation, unknown boundary conditions, and sensor measurements, are dealt with in a more relaxed sense. This energy-based functional can be conceived as a least squares minimization problem on measurement error, regularized by a model error term, aka Constitutive Relation Error (","PeriodicalId":414984,"journal":{"name":"XI International Conference on Adaptive Modeling and Simulation","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129878271","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Relaxation of an over-constrained thermal problem for the determination of a geophysical temperature distribution","authors":"M. Fernández, P. Díez, S. Zlotnik","doi":"10.23967/admos.2023.078","DOIUrl":"https://doi.org/10.23967/admos.2023.078","url":null,"abstract":".","PeriodicalId":414984,"journal":{"name":"XI International Conference on Adaptive Modeling and Simulation","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130746601","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
K. Mittal, T. Dzanic, J. Yang, S. Petrides, D. Kim, B. Keith, A. Gillette, R. Anderson, D. Faissol, "DynAMO: Dynamic Anticipatory Mesh Optimization for Hyperbolic PDEs using Reinforcement Learning", XI International Conference on Adaptive Modeling and Simulation, doi: 10.23967/admos.2023.059
In this talk, we develop a mesh adaptive algorithm that combines a posteriori error estimation with a bubble-type local mesh generation (BLMG) strategy for elliptic differential equations. The proposed node-based adaptive mesh generation method consists of four components: mesh size modification, a node placement procedure, a node-based local mesh generation strategy and an error estimation technique, which are combined so as to guarantee a conforming refined/coarsened mesh. The advantages of the BLMG-based adaptive finite element method, compared with other known methods, are as follows: refining and coarsening are handled seamlessly within the same framework; the local a posteriori error estimation is easy to implement through the adjacency list of the BLMG method; and at all levels of refinement, the updated triangles remain very well shaped, even if the mesh size at any particular refinement level varies by several orders of magnitude. Furthermore, a parallel version of the BLMG method employing a ParMETIS-based dynamic domain decomposition method is also developed. The node-based distributed mesh structure is designed to reduce the amount of communication required in mesh generation and finite element calculation. Several numerical examples are carried out to verify the high efficiency of the algorithm.
{"title":"Adaptive and Parallel Local Mesh Generation Method and its Application","authors":"Weiwei Zhang, Wei Guo, Yufeng Nie","doi":"10.23967/admos.2023.068","DOIUrl":"https://doi.org/10.23967/admos.2023.068","url":null,"abstract":"In this talk, we develop a mesh adaptive algorithm that combines a posteriori error estimation with bubble-type local mesh generation (BLMG) strategy for elliptic differential equations. The proposed node-based adaptive mesh generation method consists of four components: mesh size modification, a node placement procedure, a node-based local mesh generation strategy and an error estimation technique, which are combined so as to guarantee obtaining a conforming refined/coarsened mesh. The advantages of the BLMG-based adaptive finite element method, compared with other known methods, are given as follows: the refining and coarsening are obtained fluently in the same framework; the local a posteriori error estimation is easy to implement through the adjacency list of the BLMG method; at all levels of refinement, the updated triangles remain very well shaped, even if the mesh size at any particular refinement level varies by several orders of magnitude. Further, the parallel version of BLMG method employing ParMETIS-based dynamic domain decomposition method is also developed. The node-based distributed mesh structure is designed to reduce the communication amount spent in mesh generation and finite element calculation. Several numerical examples are carried out to verify the high efficiency of the algorithm.","PeriodicalId":414984,"journal":{"name":"XI International Conference on Adaptive Modeling and Simulation","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122864892","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Atherosclerotic plaques (within the coronaries) can produce stenosis and a decrease of blood flow in the vessel, thrombosis, or rupture. Typically, a patient presents either a stable or an unstable (low or high risk of rupture) plaque. A fast diagnosis identifying to which of these two groups the patient belongs is crucial for treatment and disposition [1]. A combination of biomechanical and image-based markers may help to improve the differentiation between these two groups of patients [2, 3]. In this regard, a methodology for determining these biomechanical markers that can be seamlessly incorporated into the clinical pipeline is of great use and facilitates the translation of this technology to the clinic. To deal with patient-specific data-driven models, we aim for flexibility, supporting all cases on the same mesh using an unfitted approach. Thus, we propose an unfitted immersed-boundary-based methodology, together with a more physical elastic-bed boundary condition, to analyze coronary artery sections undergoing uniform pressure in a quasi-static regime. The framework assumes linear elastic behavior for the coronary artery components. The elastic bed represents the materials (assumed to have linear properties and characterized by α, the elastic bed coefficient) surrounding the artery. This modeling approach guarantees the uniqueness of the solution while yielding more physical displacements and stresses than a classical Dirichlet boundary condition. With a modified version of hierarchical level
{"title":"Towards patient-specific modelling of Atherosclerotic Arterial Sections","authors":"Stephan Gahima, Pedro D´ıez, Marco Stefanati, Jos´e F´elix, Rodr´ıguez Matas, Alberto Garc´ıa-Gonz´alez","doi":"10.23967/admos.2023.036","DOIUrl":"https://doi.org/10.23967/admos.2023.036","url":null,"abstract":"Atherosclerotic plaques (within the coronaries) could produce stenosis and blood flux to decrease in the vessel, thrombosis, or rupture. Typically a patient presents either stable or unstable (low or high risk of rupture) plaque. A fast diagnosis identifying to which of these two groups the patient belongs to is crucial for its treatment and disposition [1]. A combination of biomechanical and image-based markers may help to improve the differentiation of these two groups of patients [2, 3]. In this regard, a methodology to determine these biomechanical markers to be seamlessly incorporated into the clinical pipeline is of great use and facilitates the translation of this technology to the clinic. To deal with patient-specific data-driven models, we aim for flexibility, supporting all cases on the same mesh using an unfitted approach. Thus, we propose an unfitted immersed boundary-based methodology in addition to a more physical elastic-bed boundary condition to analyze coronary artery sections undergoing uniform pressure in a quasi-static regime. The framework assumes a linear elastic behavior for the coronary artery components. The elastic bed represents the materials (assumed to have linear properties and characterized by α , the elastic bed coefficient) surrounding the artery. This modeling approach guarantees the uniqueness of the solution while obtaining more physical displacements and stresses when compared with a classical Dirichlet boundary condition. With a modified version of hierarchical level","PeriodicalId":414984,"journal":{"name":"XI International Conference on Adaptive Modeling and Simulation","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132494383","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this work we propose a new adaptive mesh refinement (AMR) method applied on isotropic octree/quadtree meshes. The new AMR approach uses a metric-based linear interpolation error estimation [2] extended to square/cubic elements. The analysis of various examples shows that minimizing the total numerical error can lead to a suboptimal mesh in terms of pure interpolation error. The grids that minimize the error for different values of N (the imposed number of elements) are characterized by a fixed ratio between the minimal and mean cell size, named the compression ratio. Above a certain value, a clear proportionality between the interpolation and the total error allows us to use the former as a criterion to adapt the grid. However, below a certain critical value of the compression ratio, no correlation between the two errors is observed and the interpolation error is no longer representative of the total error contained in the solution. Based on these results, we propose to add a model to estimate the discrete minimum grid size and to impose it as an additional constraint in the error minimization problem. The proposed minimum grid size depends on (i) the structure of the solution, (ii) the number of grid points specified and (iii) a security coefficient defined such that it controls the distance between the optimal pure interpolation error and the targeted performance. By increasing this user-defined parameter we show that we effectively restrict the range of the minimization problem to regions where we can safely use the local estimation of the interpolation error to drive the mesh adaptation and reduce the total numerical error. The method is implemented in our in-house open-source solver Basilisk [1, 3] and the performance of our new approach is validated on a Poisson-Helmholtz solver and an incompressible Euler
{"title":"Error control and propagation in Adaptive Mesh Refinement applied to elliptic equations on quadtree/octree grids","authors":"L. Prouvost, A. Belme, D. Fuster","doi":"10.23967/admos.2023.030","DOIUrl":"https://doi.org/10.23967/admos.2023.030","url":null,"abstract":"In this work we propose a new adaptive mesh refinement (AMR) method applied on isotropic oc-tree/quadtree meshes. The new AMR approach uses a metric-based linear interpolation error estimation [2] extended to square/cubic elements. The analysis of various examples shows that the minimization of the total numerical error can lead to a suboptimal mesh in terms of pure interpolation error. The grids that minimize the error for different values of N (the number of elements imposed) is related to a fixed ratio between the minimal and mean cell size named the compression ratio. Above a certain value, a clear proportionality between the interpolation and the total error allows us to use the former as a criterion to adapt the grid. However, below a certain critical value of the compression ratio, no correlation between both errors is observed and the interpolation error is no longer representative of the total error contained in the solution. Based on these results, we propose to add a model to estimate the discrete minimum grid size and to impose it as an additional constrain to the error minimization problem. The proposed minimum grid size depends on (i) the structure of the solution, (ii) the number of grid points specified and (iii) a security coefficient defined such that it controls the distance between the optimal pure interpolation error and the targeted performance. By increasing this user defined parameter we show that we effectively restrict the range of the minimization problem to regions where we can safely use the local estimation of the interpolation error to drive the mesh adaptation and reduce the total numerical error. The method is implemented in our in-house open-source solver Basilisk [1, 3] and the performance of our new approach is validated on a Poisson-Helmholtz solver and an incompressible Euler","PeriodicalId":414984,"journal":{"name":"XI International Conference on Adaptive Modeling and Simulation","volume":"71 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131170345","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Highly deforming domains are a recurring problem in fluid mechanics. In domains bounded by a free surface, for instance, the evolving boundaries need to be accurately represented at all times. In such situations, Lagrangian methods are a judicious choice for their ability to track material points in an evolving domain. The Particle Finite Element Method [1], or PFEM, has the ability to capture such strong domain deformations. In the PFEM, the fluid is represented by a set of particles. At each time step, these particles are triangulated. The conservation equations are solved on this triangulation using the finite element method to obtain the material velocity of each particle. Using this velocity, the particles' positions are updated, resulting in a deformed domain which can be triangulated again at the next time step. It is important to note that merely triangulating the particles is not enough. Indeed, there is no unique definition of the boundary of a set of points in 2D or 3D. A geometrical algorithm, known as the α-shape of a triangulation [2], is therefore employed to define the shape of the fluid domain. Since this algorithm depends on the quality and size of the elements in the triangulation, properly adapting the mesh is key to the success of the method. In this work, we propose an approach to adapt the mesh with theoretical guarantees of quality. The approach is based on Delaunay refinement strategies [3], which allow the mesh to be adapted while maintaining high-quality elements. The interest of using Delaunay refinement techniques is twofold. First of all, the algorithm for domain boundary recognition, the α-shape, is strongly connected to the Delaunay triangulation
{"title":"A Mesh Adaptation algorithm for highly deforming domains in the Particle Finite Element Method","authors":"T. Leyssens, J. Remacle","doi":"10.23967/admos.2023.062","DOIUrl":"https://doi.org/10.23967/admos.2023.062","url":null,"abstract":"Highly deforming domains are a recurring problem in fluid mechanics. In domains bounded by a free surface, for instance, the evolving boundaries need to be accurately represented at all times. In such situations, Lagrangian methods are a judicious choice for their ability to track material points in an evolving domain. The Particle Finite Element Method[1], or PFEM, has the ability to capture such strong domain deformations. In the PFEM, the fluid is represented by a set of particles. At each time step, these particles are triangulated. The conservation equations are solved on this triangulation using the finite element method to obtain the material velocity of each particle. Using this velocity, the particles’ positions are updated, resulting in a deformed domain which can be triangulated again at the next time step. It is important to note that merely triangulating the particles is not enough. Indeed, there is no unique definition of the boundary of a set of points in 2D or 3D. A geometrical algorithm, known as the α -shape of a triangulation[2], is therefore employed to define the shape of the fluid domain. Since this algorithm depends on quality and size aspects of the elements in the triangulation, properly adapting the mesh is key to the success of the method. In this work, we propose an approach to adapt the mesh with theoretical guarantees of quality. The approach is based on Delaunay refinement strategies[3], allowing to adapt the mesh while maintaining high quality elements. The interest of using Delaunay Refinement techniques is twofold. First of all, the algorithm for the domain boundary recognition, the α -shape, is strongly connected to the Delaunay triangulation","PeriodicalId":414984,"journal":{"name":"XI International Conference on Adaptive Modeling and Simulation","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116898200","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}