We present an algorithm for constructing efficient surrogate frequency-domain models of (nonlinear) parametric dynamical systems in a non-intrusive way. To capture the dependence of the underlying system on frequency and parameters, our proposed approach combines rational approximation and smooth interpolation. Within the approximation procedure, locally adaptive sparse grids are applied to explore the parameter domain effectively even when the number of parameters is moderate or large. Adaptivity is also employed to build rational approximations that efficiently capture the frequency dependence of the problem. These two features enable our method to build surrogate models that achieve a user-prescribed approximation accuracy without wasting resources on “oversampling” the frequency and parameter domains. Thanks to its non-intrusiveness, our proposed method, as opposed to projection-based techniques for model order reduction, can be applied regardless of the complexity of the underlying physical model. Notably, our algorithm for adaptive sampling can be used even when prior knowledge of the problem structure is unavailable. To showcase the effectiveness of our approach, we apply it to the study of an aerodynamic bearing. Our method allows us to build surrogate models that adequately identify the bearing's behavior with respect to both design and operational parameters, while still achieving significant speedups.
{"title":"Plug-and-play adaptive surrogate modeling of parametric nonlinear dynamics in frequency domain","authors":"Phillip Huwiler, Davide Pradovera, Jürg Schiffmann","doi":"10.1002/nme.7487","DOIUrl":"10.1002/nme.7487","url":null,"abstract":"<p>We present an algorithm for constructing efficient surrogate frequency-domain models of (nonlinear) parametric dynamical systems in a <i>non-intrusive</i> way. To capture the dependence of the underlying system on frequency and parameters, our proposed approach combines rational approximation and smooth interpolation. In the approximation effort, locally adaptive sparse grids are applied to effectively explore the parameter domain even if the number of parameters is modest or high. Adaptivity is also employed to build rational approximations that efficiently capture the frequency dependence of the problem. These two features enable our method to build surrogate models that achieve a user-prescribed approximation accuracy, without wasting resources in “oversampling” the frequency and parameter domains. Thanks to its non-intrusiveness, our proposed method, as opposed to projection-based techniques for model order reduction, can be applied regardless of the complexity of the underlying physical model. Notably, our algorithm for adaptive sampling can be used even when prior knowledge of the problem structure is not available. To showcase the effectiveness of our approach, we apply it in the study of an aerodynamic bearing. Our method allows us to build surrogate models that adequately identify the bearing's behavior with respect to both design and operational parameters, while still achieving significant speedups.</p>","PeriodicalId":13699,"journal":{"name":"International Journal for Numerical Methods in Engineering","volume":"125 14","pages":""},"PeriodicalIF":2.9,"publicationDate":"2024-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/nme.7487","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140588553","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
M. J. B. Theulings, R. Maas, L. Noël, F. van Keulen, M. Langelaar
In topology optimization of transient problems, memory requirements and computational costs often become prohibitively large due to the backward-in-time adjoint equations. Common approaches such as the Checkpointing (CP) and Local-in-Time (LT) algorithms reduce memory requirements by dividing the temporal domain into intervals and computing sensitivities on one interval at a time. The CP algorithm reduces memory by recomputing state solutions instead of storing them, which leads to a significant increase in computational cost. The LT algorithm introduces approximations in the adjoint solution to reduce memory requirements and leads to only a minimal increase in computational effort. However, we show that convergence of the LT algorithm can be hampered by errors in the approximate adjoints. To reduce memory and/or computational time, we present two novel algorithms. The hybrid Checkpointing/Local-in-Time (CP/LT) algorithm improves the convergence behavior of the LT algorithm at the cost of an increased computational time, but remains more efficient than the CP algorithm. The Parallel-Local-in-Time (PLT) algorithm reduces the computational time through a temporal parallelization in which the state and adjoint equations are solved simultaneously on multiple intervals; the state and adjoint fields converge concurrently with the design. The effectiveness of each approach is illustrated with two-dimensional density-based topology optimization problems involving transient thermal or flow physics. Compared with the other discussed algorithms, we find a significant decrease in computational time for the PLT algorithm. Moreover, we show that, under certain conditions, the approximations used in the LT and PLT algorithms bias the optimization toward designs with short characteristic times. Finally, guidelines are provided for selecting the appropriate algorithm based on the required memory reduction, the computational cost, and the convergence behavior of the optimization problem.
{"title":"Reducing time and memory requirements in topology optimization of transient problems","authors":"M. J. B. Theulings, R. Maas, L. Noël, F. van Keulen, M. Langelaar","doi":"10.1002/nme.7461","DOIUrl":"10.1002/nme.7461","url":null,"abstract":"<p>In topology optimization of transient problems, memory requirements and computational costs often become prohibitively large due to the backward-in-time adjoint equations. Common approaches such as the Checkpointing (CP) and Local-in-Time (LT) algorithms reduce memory requirements by dividing the temporal domain into intervals and by computing sensitivities on one interval at a time. The CP algorithm reduces memory by recomputing state solutions instead of storing them. This leads to a significant increase in computational cost. The LT algorithm introduces approximations in the adjoint solution to reduce memory requirements and leads to a minimal increase in computational effort. However, we show that convergence can be hampered using the LT algorithm due to errors in approximate adjoints. To reduce memory and/or computational time, we present two novel algorithms. The hybrid Checkpointing/Local-in-Time (CP/LT) algorithm improves the convergence behavior of the LT algorithm at the cost of an increased computational time but remains more efficient than the CP algorithm. The Parallel-Local-in-Time (PLT) algorithm reduces the computational time through a temporal parallelization in which state and adjoint equations are solved simultaneously on multiple intervals. State and adjoint fields converge concurrently with the design. The effectiveness of each approach is illustrated with two-dimensional density-based topology optimization problems involving transient thermal or flow physics. Compared to the other discussed algorithms, we found a significant decrease in computational time for the PLT algorithm. Moreover, we show that under certain conditions, due to the use of approximations in the LT and PLT algorithms, they exhibit a bias toward designs with short characteristic times. Finally, based on the required memory reduction, computational cost, and convergence behavior of optimization problems, guidelines are provided for selecting the appropriate algorithms.</p>","PeriodicalId":13699,"journal":{"name":"International Journal for Numerical Methods in Engineering","volume":"125 14","pages":""},"PeriodicalIF":2.9,"publicationDate":"2024-04-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/nme.7461","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140588549","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Jisheng Kou, Amgad Salama, Huangxin Chen, Shuyu Sun
Numerical modeling of immiscible two-phase flow in deformable porous media has become increasingly important due to its applications in oil reservoir engineering, geotechnical engineering, and many other fields. The coupling between two-phase flow and geomechanics poses a major challenge to the development of physically consistent mathematical models and effective numerical methods. In this article, based on the concept of free energies and guided by the second law of thermodynamics, we derive a thermodynamically consistent mathematical model for immiscible two-phase flow in poro-viscoelastic media. The model uses the fluid and solid free energies to characterize the fluid capillarity and the elasticity of the solid skeleton, so that it rigorously follows an energy dissipation law. A thermodynamically consistent formulation of the pore fluid pressure is derived naturally for the solid mechanical equilibrium equation. Additionally, the model ensures mass conservation for both the fluids and the solid. For the numerical approximation of the model, we propose an energy-stable and mass-conservative numerical method. The method inherits the energy dissipation law through appropriate energy approaches and careful treatment of the coupling between the phase saturations, the effective pore pressure, and the porosity. Using locally conservative cell-centered finite difference methods on staggered grids with upwind strategies for the saturations and porosity, we construct a fully discrete scheme that conserves the masses of both the fluids and the solid and preserves the energy dissipation law at the fully discrete level. In particular, the proposed method is an unbiased algorithm; that is, it treats the wetting phase, the non-wetting phase, and the solid phase in the same way. Numerical results are given to validate and verify the features of the proposed model and numerical method.
{"title":"Thermodynamically consistent numerical modeling of immiscible two-phase flow in poro-viscoelastic media","authors":"Jisheng Kou, Amgad Salama, Huangxin Chen, Shuyu Sun","doi":"10.1002/nme.7479","DOIUrl":"10.1002/nme.7479","url":null,"abstract":"<p>Numerical modeling of immiscible two-phase flow in deformable porous media has become increasingly significant due to its applications in oil reservoir engineering, geotechnical engineering and many others. The coupling between two-phase flow and geomechanics gives rise to a major challenge to the development of physically consistent mathematical models and effective numerical methods. In this article, based on the concept of free energies and guided by the second law of thermodynamics, we derive a thermodynamically consistent mathematical model for immiscible two-phase flow in poro-viscoelastic media. The model uses the fluid and solid free energies to characterize the fluid capillarity and solid skeleton elasticity, so that it rigorously follows an energy dissipation law. The thermodynamically consistent formulation of the pore fluid pressure is naturally derived for the solid mechanical equilibrium equation. Additionally, the model ensures the mass conservation law for both fluids and solids. For numerical approximation of the model, we propose an energy stable and mass conservative numerical method. The method herein inherits the energy dissipation law through appropriate energy approaches and subtle treatments for the coupling between two phase saturations, the effective pore pressure and porosity. Using the locally conservative cell-centered finite difference methods on staggered grids with the upwind strategies for saturations and porosity, we construct the fully discrete scheme, which has the ability to conserve the masses of both fluids and solids as well as preserve the energy dissipation law at the fully discrete level. In particular, the proposed method is an unbiased algorithm, that is, treating the wetting phase, the non-wetting phase and the solid phase in the same way. Numerical results are also given to validate and verify the features of the proposed model and numerical method.</p>","PeriodicalId":13699,"journal":{"name":"International Journal for Numerical Methods in Engineering","volume":"125 14","pages":""},"PeriodicalIF":2.9,"publicationDate":"2024-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140588804","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Jeremy A. McCulloch, Skyler R. St. Pierre, Kevin Linka, Ellen Kuhl
Sparse regression and feature extraction are the cornerstones of knowledge discovery from massive data. Their goal is to discover interpretable and predictive models that provide simple relationships among scientific variables. While the statistical tools for model discovery are well established in the context of linear regression, their generalization to nonlinear regression in material modeling is highly problem-specific and insufficiently understood. Here we explore the potential of neural networks for automatic model discovery and induce sparsity by a hybrid approach that combines two strategies: regularization and physical constraints. We integrate the concept of Lp regularization for subset selection with constitutive neural networks that leverage our domain knowledge in kinematics and thermodynamics. We train our networks with both synthetic and real data, and perform several thousand discovery runs to infer common guidelines and trends: L2 regularization or ridge regression is unsuitable for model discovery; L1 regularization or lasso promotes sparsity, but induces a strong bias that may aggressively change the results; only L0 regularization allows us to transparently fine-tune the trade-off between interpretability and predictability, simplicity and accuracy, and bias and variance. With these insights, we demonstrate that Lp-regularized constitutive neural networks can simultaneously discover both interpretable models and physically meaningful parameters. We anticipate that our findings will generalize to alternative discovery techniques such as sparse and symbolic regression, and to other domains such as biology, chemistry, or medicine. Our ability to automatically discover material models from data could have tremendous applications in generative material design and open new opportunities to manipulate matter, alter the properties of existing materials, and discover new materials with user-defined properties.
{"title":"On sparse regression, Lp-regularization, and automated model discovery","authors":"Jeremy A. McCulloch, Skyler R. St. Pierre, Kevin Linka, Ellen Kuhl","doi":"10.1002/nme.7481","DOIUrl":"10.1002/nme.7481","url":null,"abstract":"<p>Sparse regression and feature extraction are the cornerstones of knowledge discovery from massive data. Their goal is to discover interpretable and predictive models that provide simple relationships among scientific variables. While the statistical tools for model discovery are well established in the context of linear regression, their generalization to nonlinear regression in material modeling is highly problem-specific and insufficiently understood. Here we explore the potential of neural networks for automatic model discovery and induce sparsity by a hybrid approach that combines two strategies: regularization and physical constraints. We integrate the concept of <i>L</i><sub><i>p</i></sub> regularization for subset selection with constitutive neural networks that leverage our domain knowledge in kinematics and thermodynamics. We train our networks with both, synthetic and real data, and perform several thousand discovery runs to infer common guidelines and trends: <i>L</i><sub>2</sub> regularization or ridge regression is unsuitable for model discovery; <i>L</i><sub>1</sub> regularization or lasso promotes sparsity, but induces strong bias that may aggressively change the results; only <i>L</i><sub>0</sub> regularization allows us to transparently fine-tune the trade-off between interpretability and predictability, simplicity and accuracy, and bias and variance. With these insights, we demonstrate that <i>L</i><sub><i>p</i></sub> regularized constitutive neural networks can simultaneously discover both, interpretable models and physically meaningful parameters. We anticipate that our findings will generalize to alternative discovery techniques such as sparse and symbolic regression, and to other domains such as biology, chemistry, or medicine. Our ability to automatically discover material models from data could have tremendous applications in generative material design and open new opportunities to manipulate matter, alter properties of existing materials, and discover new materials with user-defined properties.</p>","PeriodicalId":13699,"journal":{"name":"International Journal for Numerical Methods in Engineering","volume":"125 14","pages":""},"PeriodicalIF":2.9,"publicationDate":"2024-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/nme.7481","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140588322","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We recently proposed an efficient method, the a posteriori finite element method (APFEM), that facilitates the parametric study of a finite element mechanical simulation as a postprocessing step, that is, without the need to run multiple simulations. APFEM only requires knowledge of the vertices of the parameter space and is able to predict accurately how the degrees of freedom of a simulation, that is, the nodal displacements, and other outputs of interest, for example, element stress tensors, evolve when the simulation parameters vary within their predefined ranges. In our previous work, these parameters were restricted to material properties and loading conditions. Here, we extend APFEM to additionally account for changes in the original geometry. This is achieved by defining an intermediary reference frame whose mapping is defined stochastically in the weak form. The subsequent deformation is then obtained by correcting for this stochastic variation of the reference frame through a multiplicative decomposition of the deformation gradient tensor. The resulting framework is shown to provide accurate mechanical predictions for relevant applications of increasing complexity: (i) quantifying the stress concentration factor of a plate under uniaxial loading with one and two elliptical holes of varying eccentricities, and (ii) performing the stochastic homogenisation of a composite plate with uncertain mechanical properties and inclusion geometry. This extension of APFEM completes our original approach by accounting parametrically for geometrical alterations, in addition to boundary conditions and material properties. The advantages of the original approach in terms of stochastic prediction, uncertainty quantification, structural and material optimisation, and Bayesian inference are all naturally preserved.
{"title":"Extension of the a posteriori finite element method (APFEM) to geometrical alterations and application to stochastic homogenisation","authors":"Yanis Ammouche, Antoine Jérusalem","doi":"10.1002/nme.7482","DOIUrl":"10.1002/nme.7482","url":null,"abstract":"<p>We recently proposed an efficient method facilitating the parametric study of a finite element mechanical simulation as a postprocessing step, that is, without the need to run multiple simulations: the a posteriori finite element method (APFEM). APFEM only requires the knowledge of the vertices of the parameter space and is able to predict accurately how the degrees of freedom of a simulation, i.e., nodal displacements, and other outputs of interests, for example, element stress tensors, evolve when simulation parameters vary within their predefined ranges. In our previous work, these parameters were restricted to material properties and loading conditions. Here, we extend the APFEM to additionally account for changes in the original geometry. This is achieved by defining an intermediary reference frame whose mapping is defined stochastically in the weak form. Subsequent deformation is then reached by correcting for this stochastic variation in the reference frame through multiplicative decomposition of the deformation gradient tensor. The resulting framework is shown here to provide accurate mechanical predictions for relevant applications of increasing complexity: (i) quantifying the stress concentration factor of a plate under uniaxial loading with one and two elliptical holes of varying eccentricities, and (ii) performing the stochastic homogenisation of a composite plate with uncertain mechanical properties and geometry inclusion. This extension of APFEM completes our original approach to account parametrically for geometrical alterations, in addition to boundary conditions and material properties. The advantages of this approach in our original work in terms of stochastic prediction, uncertainty quantification, structural and material optimisation and Bayesian inferences are all naturally conserved.</p>","PeriodicalId":13699,"journal":{"name":"International Journal for Numerical Methods in Engineering","volume":"125 14","pages":""},"PeriodicalIF":2.9,"publicationDate":"2024-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/nme.7482","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140588656","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We propose a new class of high-order time-marching schemes with dissipation control and unconditional stability for parabolic equations. High-order time integrators can deliver the optimal performance of highly accurate and robust spatial discretizations such as isogeometric analysis. The generalized-α