An ensemble score filter for tracking high-dimensional nonlinear dynamical systems
Pub Date: 2024-10-22 | DOI: 10.1016/j.cma.2024.117447
We propose an ensemble score filter (EnSF) for solving high-dimensional nonlinear filtering problems with superior accuracy. A major drawback of existing filtering methods, e.g., particle filters or ensemble Kalman filters, is their low accuracy in handling high-dimensional and highly nonlinear problems. EnSF addresses this challenge by exploiting a score-based diffusion model, defined in a pseudo-temporal domain, to characterize the evolution of the filtering density. EnSF stores the information of the recursively updated filtering density in the score function, instead of in a finite set of Monte Carlo samples (as used in particle filters and ensemble Kalman filters). Unlike existing diffusion models that train neural networks to approximate the score function, we develop a training-free score estimation method that uses a mini-batch-based Monte Carlo estimator to directly approximate the score function at any pseudo-spatial-temporal location, which provides sufficient accuracy for high-dimensional nonlinear problems while avoiding the tremendous amount of time spent training neural networks. High-dimensional Lorenz-96 systems are used to demonstrate the performance of our method. EnSF outperforms the state-of-the-art Local Ensemble Transform Kalman Filter in reliably and efficiently tracking extremely high-dimensional Lorenz systems (up to 1,000,000 dimensions) with highly nonlinear observation processes.
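As a concrete illustration of the training-free score estimation the abstract describes, here is a minimal NumPy sketch of a mini-batch Monte Carlo score estimator for a diffused ensemble density. The VP-SDE-style schedule values (alpha_t, beta_t), the function name, and the batch size are illustrative assumptions, not the authors' code.

```python
import numpy as np

def score_estimate(x, ensemble, alpha_t, beta_t, batch=256, rng=None):
    """Training-free estimate of grad_x log p_t(x), modeling p_t as the
    Gaussian mixture (1/N) sum_i N(x; alpha_t * x_i, beta_t^2 I) induced by
    diffusing the ensemble {x_i}; a random mini-batch replaces the full sum."""
    rng = np.random.default_rng() if rng is None else rng
    idx = rng.choice(len(ensemble), size=min(batch, len(ensemble)), replace=False)
    diff = x[None, :] - alpha_t * ensemble[idx]        # (B, d) deviations from centers
    logw = -0.5 * np.sum(diff**2, axis=1) / beta_t**2  # component log-density, up to a constant
    w = np.exp(logw - logw.max())
    w /= w.sum()                                       # mixture responsibilities
    return -(w[:, None] * diff).sum(axis=0) / beta_t**2

# usage: score at the ensemble mean of a 100-dimensional Gaussian ensemble
ens = np.random.default_rng(0).standard_normal((5000, 100))
s = score_estimate(ens.mean(axis=0), ens, alpha_t=0.9, beta_t=0.5)
```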
{"title":"An ensemble score filter for tracking high-dimensional nonlinear dynamical systems","authors":"","doi":"10.1016/j.cma.2024.117447","DOIUrl":"10.1016/j.cma.2024.117447","url":null,"abstract":"<div><div>We propose an ensemble score filter (EnSF) for solving high-dimensional nonlinear filtering problems with superior accuracy. A major drawback of existing filtering methods, e.g., particle filters or ensemble Kalman filters, is the low accuracy in handling high-dimensional and highly nonlinear problems. EnSF addresses this challenge by exploiting the score-based diffusion model, defined in a pseudo-temporal domain, to characterize the evolution of the filtering density. EnSF stores the information of the recursively updated filtering density function in the score function, instead of storing the information in a set of finite Monte Carlo samples (used in particle filters and ensemble Kalman filters). Unlike existing diffusion models that train neural networks to approximate the score function, we develop a training-free score estimation method that uses a mini-batch-based Monte Carlo estimator to directly approximate the score function at any pseudo-spatial–temporal location, which provides sufficient accuracy in solving high-dimensional nonlinear problems while also saving a tremendous amount of time spent on training neural networks. High-dimensional Lorenz-96 systems are used to demonstrate the performance of our method. EnSF provides superior performance, compared with the state-of-the-art Local Ensemble Transform Kalman Filter, in reliably and efficiently tracking extremely high-dimensional Lorenz systems (up to 1,000,000 dimensions) with highly nonlinear observation processes.</div></div>","PeriodicalId":55222,"journal":{"name":"Computer Methods in Applied Mechanics and Engineering","volume":null,"pages":null},"PeriodicalIF":6.9,"publicationDate":"2024-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142530363","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mesh-driven resampling and regularization for robust point cloud-based flow analysis directly on scanned objects
Pub Date: 2024-10-21 | DOI: 10.1016/j.cma.2024.117426
Point cloud representations of three-dimensional objects have remained indispensable across a diverse array of applications, given their ability to represent complex real-world geometry with just a set of points. The high fidelity and versatility of point clouds have been utilized in directly performing numerical analysis for engineering applications, bypassing the labor-intensive and time-consuming tasks of creating analysis-suitable CAD models. However, point clouds exhibit various levels of quality, often containing defects such as holes, noise, and sparse regions, leading to sub-optimal geometry representation that can impact the stability and accuracy of any analysis study. This paper aims to overcome such challenges by proposing a novel method that expands upon our recently developed direct point cloud-to-CFD approach based on immersogeometric analysis. The proposed method features a mesh-driven resampling technique to fill any unintended gaps and regularize the point cloud, making it suitable for CFD analysis. Additionally, ghost penalty stabilization is employed for incompressible flow to improve the conditioning and stability compromised by the small cut elements in immersed methods. The developed method is validated against standard benchmark geometries and real-world point clouds obtained in-house with photogrammetry. Results demonstrate the proposed framework’s robustness in facilitating CFD simulations directly on point clouds of varying quality, underscoring its potential for practical applications in analyzing real-world structures.
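To make the resampling idea tangible, the snippet below is a toy, heavily simplified sketch of grid-driven point cloud regularization and gap filling under assumptions of our own (uniform background grid, centroid snapping, face-neighbor voting); the paper's mesh-driven technique is more sophisticated than this.

```python
import numpy as np

def grid_resample(points, h):
    """Toy mesh-driven resampling of a 3D point cloud: snap points onto a
    uniform background grid of spacing h. Occupied cells are replaced by
    their centroid (regularization); empty cells with at least 4 occupied
    face neighbors are filled with the neighbor average (naive gap filling)."""
    keys = np.floor(points / h).astype(np.int64)
    cells = {}
    for k, p in zip(map(tuple, keys), points):
        cells.setdefault(k, []).append(p)
    filled = {k: np.mean(v, axis=0) for k, v in cells.items()}
    offsets = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]
    for k in list(filled):
        for off in offsets:
            cand = tuple(np.add(k, off))
            if cand in filled:
                continue
            nbrs = [filled.get(tuple(np.add(cand, o))) for o in offsets]
            nbrs = [n for n in nbrs if n is not None]
            if len(nbrs) >= 4:                 # mostly surrounded: likely a hole
                filled[cand] = np.mean(nbrs, axis=0)
    return np.array(list(filled.values()))

# usage: regularize a noisy scan stored as an (N, 3) array
cloud = np.random.default_rng(0).random((2000, 3))
clean = grid_resample(cloud, h=0.05)
```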
{"title":"Mesh-driven resampling and regularization for robust point cloud-based flow analysis directly on scanned objects","authors":"","doi":"10.1016/j.cma.2024.117426","DOIUrl":"10.1016/j.cma.2024.117426","url":null,"abstract":"<div><div>Point cloud representations of three-dimensional objects have remained indispensable across a diverse array of applications, given their ability to represent complex real-world geometry with just a set of points. The high fidelity and versatility of point clouds have been utilized in directly performing numerical analysis for engineering applications, bypassing the labor-intensive and time-consuming tasks of creating analysis-suitable CAD models. However, point clouds exhibit various levels of quality, often containing defects such as holes, noise, and sparse regions, leading to sub-optimal geometry representation that can impact the stability and accuracy of any analysis study. This paper aims to overcome such challenges by proposing a novel method that expands upon our recently developed direct point cloud-to-CFD approach based on immersogeometric analysis. The proposed method features a mesh-driven resampling technique to fill any unintended gaps and regularize the point cloud, making it suitable for CFD analysis. Additionally, ghost penalty stabilization is employed for incompressible flow to improve the conditioning and stability compromised by the small cut elements in immersed methods. The developed method is validated against standard benchmark geometries and real-world point clouds obtained in-house with photogrammetry. Results demonstrate the proposed framework’s robustness in facilitating CFD simulations directly on point clouds of varying quality, underscoring its potential for practical applications in analyzing real-world structures.</div></div>","PeriodicalId":55222,"journal":{"name":"Computer Methods in Applied Mechanics and Engineering","volume":null,"pages":null},"PeriodicalIF":6.9,"publicationDate":"2024-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142530364","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Transient anisotropic kernel for probabilistic learning on manifolds
Pub Date: 2024-10-21 | DOI: 10.1016/j.cma.2024.117453
PLoM (Probabilistic Learning on Manifolds) is a method introduced in 2016 for handling small training datasets by projecting an Itô stochastic differential equation (ISDE), drawn from a stochastic dissipative Hamiltonian dynamical system that acts as the MCMC generator, whose invariant measure is the KDE-estimated probability measure of the training dataset. PLoM performs a projection on a reduced-order vector basis related to the training dataset, using the diffusion-maps (DMAPS) basis constructed with a time-independent isotropic kernel. In this paper, we propose a new ISDE projection vector basis built from a transient anisotropic kernel, providing an alternative to the DMAPS basis that improves statistical surrogates for stochastic manifolds with heterogeneous data. The construction ensures that, for times near the initial time, the transient basis coincides with the DMAPS basis. For larger times, the differences between the two bases are characterized by the angle of their spanned vector subspaces. The optimal instant yielding the optimal transient basis is determined using an estimate of mutual information from Information Theory, normalized by the entropy estimate to account for the number of realizations used in the estimations. Consequently, this new vector basis better represents statistical dependencies in the learned probability measure for any dimension. Three applications with varying levels of statistical complexity and data heterogeneity validate the proposed theory, showing that the transient anisotropic kernel improves the learned probability measure.
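For orientation, the baseline the paper improves on, the DMAPS reduced basis built with a time-independent isotropic Gaussian kernel, can be sketched in a few lines of NumPy. The bandwidth eps, basis size m, and symmetric normalization below are standard diffusion-maps choices, not the paper's transient anisotropic construction.

```python
import numpy as np

def dmaps_basis(X, eps, m):
    """Classical diffusion-maps (DMAPS) basis from a training set X (N, d),
    using the time-independent isotropic Gaussian kernel that the paper's
    transient anisotropic kernel is designed to replace."""
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    K = np.exp(-D2 / (4.0 * eps))                        # isotropic kernel matrix
    b = K.sum(axis=1)
    Ps = K / np.sqrt(np.outer(b, b))                     # symmetric normalization
    vals, psi = np.linalg.eigh(Ps)
    order = np.argsort(vals)[::-1][:m]                   # leading spectrum
    g = psi[:, order] / np.sqrt(b)[:, None]              # right eigenvectors of D^-1 K
    return g, vals[order]

# usage: a 5-vector reduced basis for 200 samples in 3 dimensions
g, lam = dmaps_basis(np.random.default_rng(1).standard_normal((200, 3)), eps=1.0, m=5)
```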
{"title":"Transient anisotropic kernel for probabilistic learning on manifolds","authors":"","doi":"10.1016/j.cma.2024.117453","DOIUrl":"10.1016/j.cma.2024.117453","url":null,"abstract":"<div><div>PLoM (Probabilistic Learning on Manifolds) is a method introduced in 2016 for handling small training datasets by projecting an Itô equation from a stochastic dissipative Hamiltonian dynamical system, acting as the MCMC generator, for which the KDE-estimated probability measure with the training dataset is the invariant measure. PLoM performs a projection on a reduced-order vector basis related to the training dataset, using the diffusion maps (DMAPS) basis constructed with a time-independent isotropic kernel. In this paper, we propose a new ISDE projection vector basis built from a transient anisotropic kernel, providing an alternative to the DMAPS basis to improve statistical surrogates for stochastic manifolds with heterogeneous data. The construction ensures that for times near the initial time, the DMAPS basis coincides with the transient basis. For larger times, the differences between the two bases are characterized by the angle of their spanned vector subspaces. The optimal instant yielding the optimal transient basis is determined using an estimation of mutual information from Information Theory, which is normalized by the entropy estimation to account for the effects of the number of realizations used in the estimations. Consequently, this new vector basis better represents statistical dependencies in the learned probability measure for any dimension. Three applications with varying levels of statistical complexity and data heterogeneity validate the proposed theory, showing that the transient anisotropic kernel improves the learned probability measure.</div></div>","PeriodicalId":55222,"journal":{"name":"Computer Methods in Applied Mechanics and Engineering","volume":null,"pages":null},"PeriodicalIF":6.9,"publicationDate":"2024-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142530369","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
High-efficient sample point transform algorithm for large-scale complex optimization
Pub Date: 2024-10-19 | DOI: 10.1016/j.cma.2024.117451
Decomposition algorithms and surrogate model methods are frequently employed to address large-scale, intricate optimization challenges. However, the iterative resolution phase inherent to decomposition algorithms can alter the background vector, leading to repeated evaluation of samples across disparate iteration cycles, which significantly diminishes the computational efficiency of optimization. Accordingly, a novel approach, designated the Sample Point Transformation Algorithm (SPTA), is put forth in this paper as a means of enhancing efficiency through mathematical deduction. The deduction reveals that the difference between sample points in successive iteration loops is a simple function of the inter-group dependent variables. Consequently, SPTA achieves the comprehensive transformation of the sample set by establishing a surrogate model of the difference between the sample sets of two cycles with a limited number of sample points, rather than conducting a substantial number of repeated samplings. SPTA thus replaces the most time-consuming step of direct calculation in the classical optimization process. A series of numerical examples validates the approach, demonstrating an efficiency improvement of approximately 75% while maintaining optimal accuracy, and illustrating the advantage of SPTA in addressing large-scale and complex optimization problems.
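The following toy sketch conveys the flavor of transforming a sample set through a surrogate of the between-cycle difference instead of re-evaluating everything; the linear difference model, the n_fit parameter, and all names are our illustrative assumptions, not the SPTA itself.

```python
import numpy as np

def transform_samples(X, y_old, expensive_eval, n_fit=10, rng=None):
    """Toy SPTA-style transform: after the background vector changes,
    re-evaluate only n_fit samples, fit a cheap linear surrogate of the
    response *difference*, and shift all remaining old responses with it,
    avoiding a full re-evaluation of the sample set."""
    rng = np.random.default_rng() if rng is None else rng
    idx = rng.choice(len(X), size=n_fit, replace=False)
    d = np.array([expensive_eval(X[i]) for i in idx]) - y_old[idx]
    A = np.hstack([X[idx], np.ones((n_fit, 1))])       # linear model of the difference
    coef, *_ = np.linalg.lstsq(A, d, rcond=None)
    return y_old + np.hstack([X, np.ones((len(X), 1))]) @ coef
```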
{"title":"High-efficient sample point transform algorithm for large-scale complex optimization","authors":"","doi":"10.1016/j.cma.2024.117451","DOIUrl":"10.1016/j.cma.2024.117451","url":null,"abstract":"<div><div>Decomposition algorithms and surrogate model methods are frequently employed to address large-scale, intricate optimization challenges. However, the iterative resolution phase inherent to decomposition algorithms can potentially alter the background vector, leading to the repetitive evaluation of samples across disparate iteration cycles. This phenomenon significantly diminishes the computational efficiency of optimization. Accordingly, a novel approach, designated the Sample Point Transformation Algorithm (SPTA), is put forth in this paper as a means of enhancing efficiency through a process of mathematical deduction. The mathematical deduction reveals that the difference between sample points in each iteration loop is a simple function related to the inter-group dependent variables. Consequently, the SPTA method achieves the comprehensive transformation of the sample set by establishing a surrogate model of the difference between the sample sets of two cycles with a limited number of sample points, as opposed to conducting a substantial number of repeated samplings. This SPTA is employed to substitute the most time-consuming step of direct calculation in the classical optimization process. To validate the calculation efficiency, a series of numerical examples were conducted, demonstrating an improvement of approximately 75 % while maintaining optimal accuracy. This illustrates the advantage of the SPTA in addressing large-scale and complex optimization problems.</div></div>","PeriodicalId":55222,"journal":{"name":"Computer Methods in Applied Mechanics and Engineering","volume":null,"pages":null},"PeriodicalIF":6.9,"publicationDate":"2024-10-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142530368","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Tackling the curse of dimensionality in fractional and tempered fractional PDEs with physics-informed neural networks
Pub Date: 2024-10-18 | DOI: 10.1016/j.cma.2024.117448
Fractional and tempered fractional partial differential equations (PDEs) are effective models of long-range interactions, anomalous diffusion, and non-local effects. Traditional numerical methods for these problems are mesh-based, thus struggling with the curse of dimensionality (CoD). Physics-informed neural networks (PINNs) offer a promising solution due to their universal approximation, generalization ability, and mesh-free training. In principle, Monte Carlo fractional PINN (MC-fPINN) estimates fractional derivatives using Monte Carlo methods and thus could lift CoD. However, this may cause significant variance and errors, hence affecting convergence; in addition, MC-fPINN is sensitive to hyperparameters. In general, numerical methods and specifically PINNs for tempered fractional PDEs are under-developed. Herein, we extend MC-fPINN to tempered fractional PDEs to address these issues, resulting in the Monte Carlo tempered fractional PINN (MC-tfPINN). To reduce possible high variance and errors from Monte Carlo sampling, we replace the one-dimensional (1D) Monte Carlo with 1D Gaussian quadrature, applicable to both MC-fPINN and MC-tfPINN. We validate our methods on various forward and inverse problems of fractional and tempered fractional PDEs, scaling up to 100,000 dimensions. Our improved MC-fPINN/MC-tfPINN using quadrature consistently outperforms the original versions in accuracy and convergence speed in very high dimensions. Code is available at https://github.com/zheyuanhu01/Tempered_Fractional_PINN.
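The variance-reduction step is easy to see in one dimension: replacing plain Monte Carlo with Gauss-Legendre quadrature for a smooth 1D integral (a stand-in for the 1D radial integral inside a fractional-derivative estimator) cuts the error by orders of magnitude at comparable cost. A self-contained illustration, not the paper's code:

```python
import numpy as np

# estimate I = int_0^1 exp(-s) s^2 ds two ways
f = lambda s: np.exp(-s) * s**2
exact = 2.0 - 5.0 * np.exp(-1.0)      # closed form of the integral

rng = np.random.default_rng(0)
mc = f(rng.random(64)).mean()          # 64-sample Monte Carlo estimate

nodes, weights = np.polynomial.legendre.leggauss(8)  # 8-point rule on [-1, 1]
s = 0.5 * (nodes + 1.0)                # map nodes to [0, 1]
gq = 0.5 * np.dot(weights, f(s))       # Gauss-Legendre estimate

print(f"MC error: {abs(mc - exact):.2e}, Gauss error: {abs(gq - exact):.2e}")
```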
{"title":"Tackling the curse of dimensionality in fractional and tempered fractional PDEs with physics-informed neural networks","authors":"","doi":"10.1016/j.cma.2024.117448","DOIUrl":"10.1016/j.cma.2024.117448","url":null,"abstract":"<div><div>Fractional and tempered fractional partial differential equations (PDEs) are effective models of long-range interactions, anomalous diffusion, and non-local effects. Traditional numerical methods for these problems are mesh-based, thus struggling with the curse of dimensionality (CoD). Physics-informed neural networks (PINNs) offer a promising solution due to their universal approximation, generalization ability, and mesh-free training. In principle, Monte Carlo fractional PINN (MC-fPINN) estimates fractional derivatives using Monte Carlo methods and thus could lift CoD. However, this may cause significant variance and errors, hence affecting convergence; in addition, MC-fPINN is sensitive to hyperparameters. In general, numerical methods and specifically PINNs for tempered fractional PDEs are under-developed. Herein, we extend MC-fPINN to tempered fractional PDEs to address these issues, resulting in the Monte Carlo tempered fractional PINN (MC-tfPINN). To reduce possible high variance and errors from Monte Carlo sampling, we replace the one-dimensional (1D) Monte Carlo with 1D Gaussian quadrature, applicable to both MC-fPINN and MC-tfPINN. We validate our methods on various forward and inverse problems of fractional and tempered fractional PDEs, scaling up to 100,000 dimensions. Our improved MC-fPINN/MC-tfPINN using quadrature consistently outperforms the original versions in accuracy and convergence speed in very high dimensions. Code is available at <span><span>https://github.com/zheyuanhu01/Tempered_Fractional_PINN</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":55222,"journal":{"name":"Computer Methods in Applied Mechanics and Engineering","volume":null,"pages":null},"PeriodicalIF":6.9,"publicationDate":"2024-10-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142446240","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Approach for multi-valued integer programming in multi-material topology optimization: Random discrete steepest descent (RDSD) algorithm
Pub Date: 2024-10-17 | DOI: 10.1016/j.cma.2024.117449
The present study models multi-material topology optimization problems as multi-valued integer programming (MVIP), also known as combinatorial optimization. By extending classical convex analysis and convex programming to discrete point-set functions, discrete convex analysis and the discrete steepest descent (DSD) algorithm are introduced. To overcome the combinatorial complexity of the DSD algorithm, we employ sequential approximate integer programming (SAIP) to explicitly and linearly approximate the implicit objective and constraint functions. Considering the multiple potential directions of change for multi-valued design variables, the random discrete steepest descent (RDSD) algorithm is proposed, in which a random strategy selects a definitive direction of change. To analytically calculate multi-material discrete variable sensitivities, topological derivatives with material contrast are applied. In all, the MVIP is finally transformed into linear 0-1 programming that can be efficiently solved by the canonical relaxation algorithm (CRA). Explicit nonlinear examples demonstrate that the RDSD algorithm achieves nearly three orders of magnitude improvement over commercial software (GUROBI). The proposed approach, without using any continuous variable relaxation or interpolation penalization schemes, successfully solves minimum compliance, strength-related, and frequency-related optimization problems. Given its efficiency, mathematical generality, and merits over other algorithms, the proposed RDSD algorithm is meaningful for other structural and topology optimization problems involving multi-valued discrete design variables.
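A minimal sketch of the descent idea, under our own simplifications, follows: each step scans single-coordinate moves over the admissible value set, keeps the steepest improving moves, and picks one at random, echoing the "random strategy to select a definitive direction of change". This is an illustration, not the paper's implementation (which works with SAIP subproblems and topological derivatives).

```python
import numpy as np

def rdsd(f, x0, values, iters=500, rng=None):
    """Toy random discrete steepest descent over multi-valued integer
    variables: among the steepest single-coordinate improving moves,
    one is chosen uniformly at random."""
    rng = np.random.default_rng() if rng is None else rng
    x, fx = np.array(x0), f(x0)
    for _ in range(iters):
        moves = [(f(np.where(np.arange(len(x)) == i, v, x)) - fx, i, v)
                 for i in range(len(x)) for v in values if v != x[i]]
        best = min(m[0] for m in moves)
        if best >= 0:
            break                                  # no improving move remains
        steepest = [m for m in moves if m[0] == best]
        d, i, v = steepest[rng.integers(len(steepest))]
        x[i], fx = v, fx + d
    return x, fx

# usage: minimize a separable quadratic over the value set {0, 1, 2}^5
x, fx = rdsd(lambda z: float(np.sum((np.asarray(z) - 1.3) ** 2)), [0] * 5, (0, 1, 2))
```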
{"title":"Approach for multi-valued integer programming in multi-material topology optimization: Random discrete steepest descent (RDSD) algorithm","authors":"","doi":"10.1016/j.cma.2024.117449","DOIUrl":"10.1016/j.cma.2024.117449","url":null,"abstract":"<div><div>The present study models the multi-material topology optimization problems as the multi-valued integer programming (MVIP) or named as combinatorial optimization. By extending classical convex analysis and convex programming to discrete point-set functions, the discrete convex analysis and discrete steepest descent (DSD) algorithm are introduced. To overcome combinatorial complexity of the DSD algorithm, we employ the sequential approximate integer programming (SAIP) to explicitly and linearly approximate the implicit objective and constraint functions. Considering the multiple potential changed directions for multi-valued design variables, the random discrete steepest descent (RDSD) algorithm is proposed, where a random strategy is implemented to select a definitive direction of change. To analytically calculate multi-material discrete variable sensitivities, topological derivatives with material contrast is applied. In all, the MVIP is finally transferred as the linear 0–1 programming that can be efficiently solved by the canonical relaxation algorithm (CRA). Explicit nonlinear examples demonstrate that the RDSD algorithm owns nearly three orders of magnitude improvement compared with the commercial software (GUROBI). The proposed approach, without using any continuous variable relaxation and interpolation penalization schemes, successfully solves the minimum compliance problem, strength-related problem, and frequency-related optimization problems. Given the algorithm efficiency, mathematical generality and merits over other algorithms, the proposed RDSD algorithm is meaningful for other structural and topology optimization problems involving multi-valued discrete design variables.</div></div>","PeriodicalId":55222,"journal":{"name":"Computer Methods in Applied Mechanics and Engineering","volume":null,"pages":null},"PeriodicalIF":6.9,"publicationDate":"2024-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142446239","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Deep learning-driven domain decomposition (DLD³): A generalizable AI-driven framework for structural analysis
Pub Date: 2024-10-17 | DOI: 10.1016/j.cma.2024.117446
A novel, generalizable Artificial Intelligence (AI)-driven technique, termed Deep Learning-Driven Domain Decomposition (DLD³), is introduced for simulating two-dimensional linear elasticity problems with arbitrary geometries and boundary conditions (BCs). The DLD³ framework leverages trained AI models to predict the displacement field within small subdomains, each characterized by varying geometries and BCs. To enforce continuity across the entire domain, the overlapping Schwarz domain decomposition method (DDM) iteratively updates the BCs of each subdomain, thus approximating the overall solution. After evaluating multiple model architectures, the Fourier Neural Operator (FNO) was selected as the AI engine for the DLD³ method, owing to its data efficiency and high accuracy. We also present a framework that utilizes geometry reconstruction and automated meshing algorithms to generate millions of training data points from high-fidelity finite element (FE) simulations. Several case studies demonstrate the DLD³ algorithm's ability to accurately predict displacement fields in problems involving complex geometries, diverse BCs, and material properties.
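The overlapping Schwarz iteration at the heart of the framework is easy to sketch in one dimension. Below, a plain finite-difference Dirichlet solver stands in for the trained FNO subdomain predictor; the geometry, overlap, and solver are our illustrative assumptions, not the paper's setup.

```python
import numpy as np

def subdomain_solve(f, a, b, ua, ub, n=40):
    """Dirichlet solve of -u'' = f on (a, b) with a standard FD stencil.
    In the DLD³ framework this solver would be a trained surrogate
    (the paper selects an FNO); here it is plain so the sketch runs."""
    x = np.linspace(a, b, n); h = x[1] - x[0]
    A = (np.diag(np.full(n - 2, 2.0)) + np.diag(np.full(n - 3, -1.0), 1)
         + np.diag(np.full(n - 3, -1.0), -1))
    rhs = h**2 * f(x[1:-1]); rhs[0] += ua; rhs[-1] += ub
    return x, np.concatenate([[ua], np.linalg.solve(A, rhs), [ub]])

# overlapping Schwarz on (0, 1): subdomains (0, 0.6) and (0.4, 1)
f = lambda x: np.ones_like(x)
g1 = g2 = 0.0                        # interface guesses at x = 0.6 and x = 0.4
for _ in range(20):                  # iterate until interface values settle
    x1, u1 = subdomain_solve(f, 0.0, 0.6, 0.0, g1)
    g2 = np.interp(0.4, x1, u1)      # update BC of subdomain 2 from subdomain 1
    x2, u2 = subdomain_solve(f, 0.4, 1.0, g2, 0.0)
    g1 = np.interp(0.6, x2, u2)      # and vice versa
# exact solution of -u'' = 1 with u(0) = u(1) = 0 is u(x) = x(1 - x)/2
```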
{"title":"Deep learning-driven domain decomposition (DLD3): A generalizable AI-driven framework for structural analysis","authors":"","doi":"10.1016/j.cma.2024.117446","DOIUrl":"10.1016/j.cma.2024.117446","url":null,"abstract":"<div><div>A novel, generalizable Artificial Intelligence (AI)-driven technique, termed Deep Learning-Driven Domain Decomposition (DLD<span><math><msup><mrow></mrow><mrow><mn>3</mn></mrow></msup></math></span>), is introduced for simulating two-dimensional linear elasticity problems with arbitrary geometries and boundary conditions (BCs). The DLD<span><math><msup><mrow></mrow><mrow><mn>3</mn></mrow></msup></math></span> framework leverages trained AI models to predict the displacement field within small subdomains, each characterized by varying geometries and BCs. To enforce continuity across the entire domain, the overlapping Schwarz domain decomposition method (DDM) iteratively updates the BCs of each subdomain, thus approximating the overall solution. After evaluating multiple model architectures, the Fourier Neural Operator (FNO) was selected as the AI engine for the DLD<span><math><msup><mrow></mrow><mrow><mn>3</mn></mrow></msup></math></span> method, owing to its data efficiency and high accuracy. We also present a framework that utilizes geometry reconstruction and automated meshing algorithms to generate millions of training data points from high-fidelity finite element (FE) simulations. Several case studies are provided to demonstrate the DLD<span><math><msup><mrow></mrow><mrow><mn>3</mn></mrow></msup></math></span> algorithm’s ability to accurately predict displacement fields in problems involving complex geometries, diverse BCs, and material properties.</div></div>","PeriodicalId":55222,"journal":{"name":"Computer Methods in Applied Mechanics and Engineering","volume":null,"pages":null},"PeriodicalIF":6.9,"publicationDate":"2024-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142446237","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
DiffMat: Data-driven inverse design of energy-absorbing metamaterials using diffusion model
Pub Date: 2024-10-17 | DOI: 10.1016/j.cma.2024.117440
Energy-absorbing materials and structures are widely applied in industrial areas. Presently, design methods for energy-absorbing metamaterials rely mainly on empirical or bio-inspired configurations. Inspired by AI-generated content, this paper proposes a novel inverse design framework for energy-absorbing metamaterials using a diffusion model, called DiffMat, which can be customized to generate microstructures given desired stress–strain curves. DiffMat learns the conditional distribution of microstructure given mechanical properties and can realize the one-to-many mapping from properties to geometries. Numerical simulations and experimental validations demonstrate the capability of DiffMat to generate a diverse array of microstructures from given mechanical properties, indicating its validity and high accuracy in generating metamaterials that meet the desired mechanical properties. The successful demonstration of the proposed inverse design framework highlights its potential to revolutionize the development of energy-absorbing metamaterials and underscores the broader impact of integrating AI-inspired methodologies into metamaterial design and engineering.
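To illustrate the one-to-many property, here is a minimal DDPM-style conditional reverse sampler; eps_model(x, t, cond) is a hypothetical stand-in for DiffMat's trained noise-prediction network, and the linear beta schedule is a common default rather than the paper's configuration.

```python
import numpy as np

def sample_conditional(eps_model, cond, shape, T=200, rng=None):
    """Minimal DDPM-style conditional reverse sampler. Running it several
    times with the same target stress-strain curve `cond` but different
    seeds yields distinct microstructures: the one-to-many mapping."""
    rng = np.random.default_rng() if rng is None else rng
    betas = np.linspace(1e-4, 0.02, T)
    alphas = 1.0 - betas
    abar = np.cumprod(alphas)                      # cumulative signal retention
    x = rng.standard_normal(shape)                 # start from pure noise
    for t in range(T - 1, -1, -1):
        eps = eps_model(x, t, cond)                # predicted noise, conditioned
        x = (x - betas[t] / np.sqrt(1.0 - abar[t]) * eps) / np.sqrt(alphas[t])
        if t > 0:
            x += np.sqrt(betas[t]) * rng.standard_normal(shape)
    return x

# usage with a dummy predictor (DiffMat would supply a trained network)
dummy = lambda x, t, c: 0.1 * x
micro = sample_conditional(dummy, cond=None, shape=(32, 32))
```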
{"title":"DiffMat: Data-driven inverse design of energy-absorbing metamaterials using diffusion model","authors":"","doi":"10.1016/j.cma.2024.117440","DOIUrl":"10.1016/j.cma.2024.117440","url":null,"abstract":"<div><div>Energy-absorbing materials and structures are widely applied in industrial areas. Presently, design methods of energy-absorbing metamaterials mainly rely on empirical or bio-inspired configurations. Inspired by AI-generated content, this paper proposes a novel inverse design framework for energy-absorbing metamaterial using diffusion model called DiffMat, which can be customized to generate microstructures given desired stress–strain curves. DiffMat learns the conditional distribution of microstructure given mechanical properties and can realize the one-to-many mapping from properties to geometries. Numerical simulations and experimental validations demonstrate the capability of DiffMat to generate a diverse array of microstructures based on given mechanical properties. This indicates the validity and high accuracy of DiffMat in generating metamaterials that meet the desired mechanical properties. The successful demonstration of the proposed inverse design framework highlights its potential to revolutionize the development of energy-absorbing metamaterials and underscores the broader impact of integrating AI-inspired methodologies into metamaterial design and engineering.</div></div>","PeriodicalId":55222,"journal":{"name":"Computer Methods in Applied Mechanics and Engineering","volume":null,"pages":null},"PeriodicalIF":6.9,"publicationDate":"2024-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142446238","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A new exploration of mesoscopic structure in the nonlocal macro-meso-scale consistent damage model for quasi-brittle materials
Pub Date: 2024-10-16 | DOI: 10.1016/j.cma.2024.117456
In the present study, a new exploration of the mesoscopic structure is proposed for the nonlocal macro-meso-scale consistent damage (NMMD) model, and the mapping from mesoscopic damage to macroscopic damage in the original NMMD model is generalized. In the proposed model, material points are divided into two types, macroscopic and mesoscopic. Each macroscopic material point has mesoscopic material points within its influence domain, and every two distinct mesoscopic material points form a material point pair. The macroscopic damage at a macroscopic material point is evaluated as the weighted average of mesoscale damage over the material point pairs in its influence domain. However, compared with the original NMMD model, the mesoscale damage of material point pairs is determined by the motion of the mesoscopic material points rather than the macroscopic ones; the macroscopic material points in the proposed model represent only the nonlocal effect and the macroscopic damage. Moreover, the shape of the influence domain and the arrangement of material point pairs are arbitrary rather than fixed, i.e., the unified mesoscopic structure is abstract. To verify the proposed model, a specific mesoscopic structure is generated for quasi-brittle materials without considering the randomness of material properties: the influence domain is a circle, and the mesoscopic material points are generated by the tangent sphere method. The numerical results indicate that the proposed model accurately captures the crack patterns of quasi-brittle materials and exhibits excellent numerical robustness. Meanwhile, a mode-I failure example demonstrates that the computational efficiency of the proposed model is not lower than that of the original NMMD model. More importantly, the mesoscopic structure modeling framework provides a new feasible approach for extending other models, e.g., the virtual internal bond model and peridynamics. Future work within the NMMD framework will extend the proposed model to anisotropic and composite materials and to dynamic crack simulation of large structures.
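A stripped-down sketch of the pair-averaging step may help fix ideas: macroscopic damage at a point computed as a weighted average of bond-stretch-based pair damage over mesoscopic points inside a circular influence domain. The step damage law and the 1/L0 weighting are assumptions of this sketch; the NMMD model's actual definitions differ.

```python
import numpy as np

def macro_damage(xc, meso_pts, meso_disp, radius, eps_crit):
    """Toy NMMD-style evaluation: macroscopic damage at xc is a weighted
    average of mesoscale pair damage over all pairs of mesoscopic points
    inside a circular influence domain of the given radius."""
    inside = np.linalg.norm(meso_pts - xc, axis=1) <= radius
    pts, disp = meso_pts[inside], meso_disp[inside]
    num = den = 0.0
    for i in range(len(pts)):
        for j in range(i + 1, len(pts)):
            xi = pts[j] - pts[i]                          # reference bond vector
            L0 = np.linalg.norm(xi)
            L = np.linalg.norm(xi + disp[j] - disp[i])    # deformed bond length
            stretch = (L - L0) / L0
            d_pair = 1.0 if stretch > eps_crit else 0.0   # assumed step damage law
            w = 1.0 / L0                                  # assumed: shorter bonds weigh more
            num += w * d_pair; den += w
    return num / den if den else 0.0
```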
{"title":"A new exploration of mesoscopic structure in the nonlocal macro-meso-scale consistent damage model for quasi-brittle materials","authors":"","doi":"10.1016/j.cma.2024.117456","DOIUrl":"10.1016/j.cma.2024.117456","url":null,"abstract":"<div><div>In the present study, a new exploration of the mesoscopic structure is proposed for the nonlocal macro‑meso-scale consistent damage (NMMD) model, and the definition from mesoscopic damage to macroscopic damage in the original NMMD model is expanded. In the proposed model, material points are divided into two types: macroscopic and mesoscopic. For each macroscopic material point, there are mesoscopic material points within its influence domain, and every two different mesoscopic material points form a material point pair. The macroscopic damage at a macroscopic material point is also evaluated as the weighted average of mesoscale damage over material point pairs in the influence domain. However, compared with the original NMMD model, the mesoscale damage of material point pairs is determined by the motion of mesoscopic material points, rather than macroscopic material points. The macroscopic material points in the proposed model only represent the nonlocal effect and the macroscopic damage. Moreover, the shape of the influence domain and the arrangement of material point pairs are arbitrary and not fixed, i.e., the unified mesoscopic structure is abstract. To verify the proposed model, a specific mesoscopic structure is generated for quasi-brittle materials without considering the randomness of material properties. In this mesoscopic structure, the shape of the influence domain is a circle, and the mesoscopic material points are generated by the tangent sphere method. The numerical results indicate that the proposed model can accurately capture the crack patterns of quasi-brittle materials and exhibits excellent numerical robustness. Meanwhile, through a mode-I failure example, it is demonstrated that the computational efficiency of the proposed model is not lower than the original NMMD model. More importantly, the framework of mesoscopic structure modeling provides a new feasible approach for the extension of other models, e.g., virtual internal bond model and peridynamics. The urgent work within the NMMD model framework is to extend the proposed model to anisotropic, composite materials and dynamic crack simulation of large structures in the future.</div></div>","PeriodicalId":55222,"journal":{"name":"Computer Methods in Applied Mechanics and Engineering","volume":null,"pages":null},"PeriodicalIF":6.9,"publicationDate":"2024-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142441851","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Topology optimization of structures guarding against brittle fracture via peridynamics-based SIMP approach
Pub Date: 2024-10-16 | DOI: 10.1016/j.cma.2024.117438
The fracture resistance of structures consisting of brittle materials is critically important in engineering practice. In this work, we explore the application of peridynamics (PD) to the optimization of structures against brittle fracture. A fracture-resistance topology optimization scheme under the PD-based analysis framework is proposed, in which two fracture-based strategies are adopted to improve the structural fracture behavior. The first sets the conventional fracture energy as a constraint. The second constrains the bond stretch, built on the PD framework's unique concept of the "bond", which smoothly transfers the energy-based fracture resistance control to an intuitive and mathematically tractable geometric expression. The topology optimization is carried out under the SIMP framework, where densities are assigned to the bonds via material points to represent topology changes and crack generation. Numerical examples and experiments demonstrate that the proposed strategies can guarantee the safety of the optimized structure against fracture failure.
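As a rough sketch of the bond-stretch quantity behind the second strategy, the snippet below computes every peridynamic bond stretch within the horizon and aggregates the field with a p-norm into a single smooth constraint value; the aggregation and all names are assumptions for illustration, not the paper's exact constraint.

```python
import numpy as np

def max_bond_stretch(pts, disp, horizon, p=20.0):
    """Compute the stretch s = (L - L0)/L0 of every peridynamic bond within
    the horizon, then aggregate the tensile stretches with a p-norm (an
    assumed smooth-max). Keeping this value below a critical stretch is a
    geometric proxy for guarding against fracture."""
    stretches = []
    for i in range(len(pts)):
        for j in range(i + 1, len(pts)):
            xi = pts[j] - pts[i]                          # reference bond vector
            L0 = np.linalg.norm(xi)
            if 0.0 < L0 <= horizon:
                L = np.linalg.norm(xi + disp[j] - disp[i])
                stretches.append((L - L0) / L0)
    s = np.asarray(stretches)
    return (np.sum(np.maximum(s, 0.0) ** p)) ** (1.0 / p)
```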
{"title":"Topology optimization of structures guarding against brittle fracture via peridynamics-based SIMP approach","authors":"","doi":"10.1016/j.cma.2024.117438","DOIUrl":"10.1016/j.cma.2024.117438","url":null,"abstract":"<div><div>Fracture resistance of structures consisting of brittle materials is significantly important in engineering practice. In this work, we explore the application of peridynamics (PD) in the optimization of structures against brittle fracture. A fracture resistance topology optimization scheme under the PD-based analysis framework is proposed, where two fracture-based strategies are adopted to improve the structural fracture behavior. The first one sets the conventional fracture energy as a constraint. While the second constraint is the bond stretch established on the unique concept “bond” of the PD framework, which smoothly transfers the energy-based fracture resistance control to an intuitive and mathematically tractable geometric expression. The topology optimization is carried out under the SIMP framework, where densities are assigned to the bonds via material points to represent the topology changes and crack generation. Numerical examples and experiments demonstrate that the proposed strategies can guarantee the safety of the optimized structure against the occurrence of fracture failure.</div></div>","PeriodicalId":55222,"journal":{"name":"Computer Methods in Applied Mechanics and Engineering","volume":null,"pages":null},"PeriodicalIF":6.9,"publicationDate":"2024-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142441962","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}