Title: On the Discretization of the Continuous Adjoint to the Euler Equations in Aerodynamic Shape Optimization
Authors: M. Kontou, X. Trompoukis, V. Asouti, K. Giannakoglou
DOI: https://doi.org/10.23967/admos.2023.056
Venue: XI International Conference on Adaptive Modeling and Simulation

In aerodynamic shape optimization, gradient-based algorithms usually rely on the adjoint method to compute gradients. The continuous adjoint offers clear insight into the adjoint equations and their boundary conditions, but the chosen discretization schemes significantly affect the accuracy of the resulting gradients. The discrete adjoint, on the other hand, computes sensitivities consistent with the discretized flow equations, though at a higher memory footprint. This work bridges the gap between the two adjoint variants by proposing consistent discretization schemes (inspired by the discrete adjoint) for the continuous adjoint PDEs and their boundary conditions, with a clear physical meaning. The capabilities of the new Think-Discrete-Do-Continuous adjoint are demonstrated, for inviscid flows of compressible fluids, in shape optimization in external aerodynamics.
Title: Digital Volume Correlation techniques for patient-specific simulation of vertebrae with metastasis
Authors: L. Person, F. Hild, E. Nadal, J. Roderas, O. Allix
DOI: https://doi.org/10.23967/admos.2023.035
{"title":"Adaptive multilevel Monte Carlo for risk averse optimization","authors":"F. Nobile","doi":"10.23967/admos.2023.081","DOIUrl":"https://doi.org/10.23967/admos.2023.081","url":null,"abstract":"","PeriodicalId":414984,"journal":{"name":"XI International Conference on Adaptive Modeling and Simulation","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123159418","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: A Multiscale Method with Continuous Matter Addition in DED Additive Manufacturing Processes
Authors: M. Picos, Q. Quintela, J. Rodríguez, P. Barral
DOI: https://doi.org/10.23967/admos.2023.053

Additive manufacturing (AM) is a production method with great potential for creating complex geometries and reducing material and energy waste. Numerical simulations are crucial to minimize fabrication failures and optimize designs. Nevertheless, the high computational cost of simulating the multi-scale behaviour of AM processes is a challenge. To address this, an Arlequin-based method is proposed, which uses two distinct meshes to capture the high thermal gradients near the melt pool: a coarse mesh for the entire domain and a fine mesh that moves with the heating source. Additionally, a change of variable simplifies the calculations at each time step by transforming the moving fine mesh into a fixed mesh. The proposed methodology has the potential to reduce computational costs and improve the efficiency of AM simulations.
Title: A Reduced Order Approximation for Identification of Non-linear Material Parameters using Optimal Control Method
Authors: M. Bhattacharyya, P. Feissel
DOI: https://doi.org/10.23967/admos.2023.024

The objective of this research is to introduce a parametric identification strategy based on full-field measurements obtained from digital image correlation (DIC) [1]. The optimal control method consists of segregating the equations pertaining to the modelling of the experiments into reliable and less reliable sets; it does not require complete information on the boundaries, and the measurement zone need not cover the complete structure. The proposed scheme is an extension of the optimal control method previously developed for determining elastic parameters [2]; here the focus is on material parameters concerning non-linear behaviour such as plasticity, damage and hardening. The optimal control approach, which can be seen as a variant of the modified constitutive relation error (MCRE) method, considers the equivalence of the kinematic measurements and the model displacements to be the only unreliable equation. MCRE methods have been used previously for generalised standard materials, where the constitutive behaviour can be expressed in terms of state laws and evolution equations [3]. The resolution of the non-linear optimisation functional under non-linear constraints is achieved through an iterative solver such as the large time increment (LATIN) method. This method segregates the difficulty into a global linear set of equations and a non-linear local set of equations, and a space-time resolution is achieved through iterations between these two sets. Although the usage of a LATIN-type iterative procedure in an MCRE-type method is not unprecedented [4], the usage of a proper generalised decomposition (PGD) based reduced order approximation can be considered a novelty of this research. For plasticity behaviour, the quantities of interest are represented in separable variable forms (in space and
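The PGD-style separable representation mentioned above writes a space-time field as a short sum of products of space modes and time modes. As a sketch, the modes below are extracted with an SVD as a stand-in for a greedy PGD enrichment; the field and mode count are illustrative assumptions:

```python
import numpy as np

# A space-time field of exact rank 2: each term is (space mode) x (time mode).
x = np.linspace(0.0, 1.0, 50)
t = np.linspace(0.0, 1.0, 40)
U = np.outer(np.sin(np.pi * x), t) + 0.1 * np.outer(x, t ** 2)

# Separated approximation with two modes (SVD used here in place of a
# greedy PGD solver, which would build the modes one at a time).
S_modes, s, T_modes = np.linalg.svd(U, full_matrices=False)
U2 = (S_modes[:, :2] * s[:2]) @ T_modes[:2, :]

err = np.linalg.norm(U - U2) / np.linalg.norm(U)
```

Because the field is exactly rank 2, two separated modes reproduce it to machine precision; for general plasticity fields the number of modes grows with the complexity of the space-time coupling.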
Title: Massively Parallel Simulation and Adaptive Mesh Refinement for 3D Elastostatics Contact Mechanics Problems
Authors: A. Epalle, I. Ramière, G. Latu, F. Lebon
DOI: https://doi.org/10.23967/admos.2023.061

The numerical simulation of contact mechanics problems is computationally challenging, as these problems are locally highly non-linear and non-regular. Efficient numerical solutions of such problems usually rely on adaptive mesh refinement (AMR). Even if efficient parallelizations of standard AMR techniques such as h-adaptive methods begin to appear [1], their combination with contact mechanics problems remains a challenging task. Indeed, current developments on algorithms for contact mechanics problems focus either on non-parallelized new adaptive mesh refinement methods [2] or on parallelization methods for uniformly refined meshes [3,4]. The purpose of this work is to introduce a High Performance Computing strategy for solving 3D contact elastostatics problems with AMR on hexahedral elements. The contact is treated by a node-to-node algorithm with a penalization technique in order to deal with primal variables only. This algorithm therefore has the advantages of accurately modelling the studied phenomenon while neither increasing the number of unknowns nor modifying the formulation in an intrusive manner. Concerning the AMR strategy, we rely on a non-conforming h-adaptive refinement solution. This method has already been shown to scale well [1,7]. Regarding the detection of the refinement zones, a Zienkiewicz-Zhu (ZZ) type error estimator is used to select the elements to be refined through a local detection criterion [5]. In addition, a geometric-based stopping criterion is applied in order to automatically stop the refinement process, even in case of local singularities. This combined strategy has recently proven its efficiency [6]. In this contribution, we endeavor to extend the combination of these contact mechanics and AMR strategies to a parallel framework. In order to carry
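The primal-only penalization idea described above can be sketched in its simplest setting: a 1D spring pressed against a rigid obstacle. Penetration of the gap g is resisted by a stiff penalty spring eps, so the displacement remains the only unknown. The scalar setting and numbers are illustrative assumptions, not the paper's 3D hexahedral formulation:

```python
# 1D penalty contact: spring of stiffness k, load f, rigid obstacle at gap g.

def solve_penalty_contact(k, f, g, eps):
    u = f / k                          # unconstrained equilibrium
    if u <= g:                         # contact inactive: no penetration
        return u
    # contact active: penalty spring eps acts on the penetration (u - g),
    # giving equilibrium (k + eps) u = f + eps * g  -- still one primal unknown
    return (f + eps * g) / (k + eps)

k, f, g = 10.0, 50.0, 1.0              # unconstrained u = 5.0 > g: contact is active
u = solve_penalty_contact(k, f, g, eps=1e8)
```

For finite eps the solution penetrates the obstacle slightly (u marginally above g); as eps grows the penetration vanishes, which is the classical accuracy/conditioning trade-off of penalty contact.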
{"title":"SuperAdjoint: Super-Resolution Neural Networks in Adjoint-based Output Error Estimation","authors":"T. Hunter, S. Hulsoff, A. Sitaram","doi":"10.23967/admos.2023.058","DOIUrl":"https://doi.org/10.23967/admos.2023.058","url":null,"abstract":"","PeriodicalId":414984,"journal":{"name":"XI International Conference on Adaptive Modeling and Simulation","volume":"68 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116001153","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: A Piggyback-Style Algorithm for Learning Improved Shearlets and TGV Discretizations
Authors: L. Bogensperger, A. Chambolle, T. Pock
DOI: https://doi.org/10.23967/admos.2023.013

This work demonstrates how to use a piggyback-style algorithm to compute derivatives of loss functions that depend on solutions of convex-concave saddle-point problems. Two application scenarios are presented, where the piggyback primal-dual algorithm is used to learn an enhanced shearlet transform and an improved discretization of the second-order total generalized variation.
Title: Accelerated Simulation via Combination of Model Reduction, Surrogate Modeling and Reuse of Simulation Data
Authors: A. Strauß, J. Kneifl, J. Fehr, M. Bischoff
DOI: https://doi.org/10.23967/admos.2023.003

In many applications in Computer Aided Engineering, such as parametric studies, structural optimization or virtual material design, a large number of almost similar models have to be simulated. Although the individual scenarios may differ only marginally in both space and time, the same effort is invested in every single new simulation, with no account taken of experience and knowledge from previous simulations. We have therefore developed a method that combines Model Order Reduction (MOR), surrogate modeling and the reuse of simulation data, thus exploiting knowledge from previous simulation runs to accelerate computations in multi-query contexts. MOR allows reducing model fidelity in space and time without significantly deteriorating accuracy. By reusing simulation data, a predictor or preconditioner can be obtained from a learned surrogate model and used in subsequent simulations. The efficiency of the method is showcased by the exact computation of critical points encountered in nonlinear structural analysis, such as limit and bifurcation points, by the method of extended systems [1] for systems that depend on a set of design parameters, such as material or geometric properties. Such critical points are of utmost engineering significance due to the special characteristics of the structural behavior in their vicinity. Using classical reanalysis methods, such as fold line analysis [2], the computation of critical points of almost similar systems can be accelerated. This technology is limited, however, by the fact that only small parameter variations are possible; otherwise, the algorithm may converge to a wrong solution or fail to converge. The newly developed data-based “reduced model reanalysis” method overcomes
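The method of extended systems cited above locates a limit (fold) point by augmenting the equilibrium equation f(u, lam) = 0 with the criticality condition df/du = 0 and solving both simultaneously by Newton's method. The cubic equilibrium path below is a toy stand-in for a structural model, used only to show the mechanics of the extended system:

```python
import numpy as np

# Equilibrium path f(u, lam) = u^3 - u + lam = 0 has fold points where
# df/du = 3u^2 - 1 = 0, i.e. u = +/- 1/sqrt(3).

def f(u, lam):
    return u ** 3 - u + lam

def f_u(u):
    return 3.0 * u ** 2 - 1.0

def fold_point(u0, lam0, tol=1e-12):
    """Newton iteration on the extended system G = (f, f_u) = 0."""
    u, lam = u0, lam0
    for _ in range(50):
        G = np.array([f(u, lam), f_u(u)])
        J = np.array([[f_u(u), 1.0],       # d f / d(u, lam)
                      [6.0 * u, 0.0]])     # d f_u / d(u, lam)
        du, dlam = np.linalg.solve(J, -G)
        u, lam = u + du, lam + dlam
        if abs(du) + abs(dlam) < tol:
            break
    return u, lam

u_c, lam_c = fold_point(0.5, 0.0)          # converges to the positive fold
```

Starting near the positive branch, Newton converges to the analytic fold at u = 1/sqrt(3), lam = 2/(3*sqrt(3)); the reanalysis methods discussed above accelerate exactly this kind of solve when the parameters change only slightly.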
Title: Segmentation of Inhomogeneous Noisy Images via a Bayesian Model Coupled with Anisotropic Mesh Adaptation
Authors: M. Giacomini, S. Perotto
DOI: https://doi.org/10.23967/admos.2023.014

Automatic image segmentation is a key process in many applications of science and engineering, from medical imaging to autonomous vehicle driving and smart agriculture monitoring. In these contexts, the presence of spatial inhomogeneities and noise challenges the robustness of segmentation strategies [1]. In this talk, a finite element-based segmentation algorithm handling images with different spatial patterns is presented. The methodology relies on a split Bregman algorithm for the minimisation of a region-based Bayesian energy functional and on an anisotropic recovery-based error estimate to drive mesh adaptation [2]. On the one hand, a Bayesian model is considered to exploit the intrinsic spatial information in inhomogeneous images [3]. To address the ill-posedness of the associated optimisation problem, a convexification technique [4] is coupled with a split Bregman algorithm for the minimisation of the regularised functional [5]. On the other hand, an anisotropic mesh adaptation procedure guarantees a smooth description of the interface between the background and foreground of the image, without jagged details [2,6]. The proper alignment, sizing and shaping of the anisotropically adapted mesh elements guarantee that the increased precision is achieved with a reduced number of degrees of freedom [2,6]. Numerical experiments will be presented to showcase the performance of the resulting split-adapt Bregman algorithm on synthetic and real images featuring inhomogeneous spatial patterns. The method outperforms the standard split Bregman approach, providing accurate and robust results even in the presence of Gaussian,
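A key inner step of any split Bregman iteration, including the one above, is the auxiliary-variable update, which reduces to elementwise soft thresholding (shrinkage). The sketch below shows only this isolated ingredient, not the full split-adapt Bayesian algorithm of the paper:

```python
import numpy as np

def shrink(x, gamma):
    """Elementwise soft thresholding:
    argmin_d  gamma*|d| + 0.5*(d - x)^2  has the closed form below."""
    return np.sign(x) * np.maximum(np.abs(x) - gamma, 0.0)

# Components smaller than gamma in magnitude are set to zero; the rest
# move toward zero by gamma. This is what promotes sparse gradients
# (sharp, clean interfaces) in the segmentation functional.
d = shrink(np.array([-2.0, -0.3, 0.0, 0.5, 3.0]), gamma=1.0)
```

In the full algorithm this closed-form update alternates with a linear solve for the level-set-like field, which is where the finite element discretisation and the anisotropic mesh adaptation enter.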